
Redefining presentations with Prezi

As with many others I know, my de facto tool for creating presentations to date has been PowerPoint. It's a great application, but it really hasn't changed much over the years. Also, its support is largely targeted toward desktop systems, which becomes limiting if you want to edit or view presentations on your Apple or Android mobile device. Luckily, there have been others innovating in this space. I recently came across a SaaS service for presentations called Prezi and wanted to share my initial thoughts after having used it for about a week.

The typical presentation in PowerPoint or OpenOffice consists of a series of slides. You can add animations and such to build on top of that, but at the base it's a serial set of content that you flip through. Prezi takes a completely different approach. Instead of traditional slides, you're given a canvas where you can freely zoom in/out and pan around. Instead of a series of slides, you have a spatial playground where you can place content. Using space in this way allows you to provide context throughout your presentation, since spatial locality implies a grouping of ideas.

This may be easier to demonstrate than to describe. As an example of what a presentation using Prezi looks like, here's one from Prezi co-founder Adam Somlai-Fischer on how to create a good presentation. Notice how right up front you get a complete picture of what the presentation is about and its high-level components. You can achieve a similar effect with an outline slide in traditional presentation software, but I personally really like this approach. The ability to zoom in and out replaces the typical slide-flipping action. Prezi handles all of the animation for you, and frankly I found it to be really slick and professional.

High-level summary and flashy example aside, the real question is how easy it is to use. First of all, the application is delivered as a SaaS model. There is a free tier, and then additional paid tiers that provide more flexibility around privacy settings, storage capacity, etc. There is no upsell in terms of core features, though, allowing you to give it a solid test drive with a free account. The benefit of the browser-based SaaS delivery is that you can easily continue work on a presentation across systems. For me personally that's a must-have capability. It did take me a bit to unlearn old habits from the conventional slide world and move into the spatial world of Prezi. However, once I got the hang of it I found I really liked it. From a UI perspective the application is pretty intuitive. I did find...

Read More

Why you need a cloud strategy

Organizations often place significant time and resources into defining a business strategy and measuring execution against it. With the rapidly evolving landscape of computing software and hardware technologies, CIOs today need to proactively define and address strategies for their IT infrastructures as well. I often hear folks mention that they're implementing a strategy to "adopt cloud", but as I probe deeper I find that it is composed of near-term goals that do not properly consider longer-term implications. Just as a company develops a business strategy to align short- and long-term execution, any engineering team using or migrating to cloud computing must define a cloud strategy that addresses both near-term needs and long-term goals. In this article I'll briefly discuss what I see as three of the key issues that such a strategy must attempt to address.

Private vs. public infrastructures

Public clouds are a common starting place when moving to cloud environments, especially for cash-strapped startups. These solutions, whether IaaS or PaaS in nature, minimize the time between application development and deployment, as you're able to remove any overheads associated with building and operating your own infrastructure. From a financial standpoint, this also makes sense if the goal is to iterate on an application to determine viability and adoption, as you're taking advantage of the elasticity of public providers. However, at some point the cost of consuming resources continuously on a public cloud can outweigh the cost of running on a private infrastructure. Hence, from a pure TCO perspective it is necessary to have a plan for when (and how, as I'll discuss in a bit) applications should be moved internally onto a private cloud. In addition to cost, there will often come a time when, for privacy, security, or performance reasons, a private infrastructure may be more compelling than a public solution. In reality, over time as your adoption of cloud grows and evolves, you're likely to have varying needs based upon application tiers where the decision between private and public varies. Therefore, what's really needed is a hybrid cloud plan that provides your organization the flexibility necessary to be agile and balance between private and public resources. To complicate things further, as you grow you'll probably want to leverage multiple public providers, both for improved fault resilience and to avoid vendor lock-in.

Workload portability

From a technical perspective, there's a significant gap between theory and practice when trying to implement a hybrid cloud strategy today. A key part of the problem has to do with portability (see the sketch below). When developing an application for a public cloud, you will have access to a...
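To make the portability point a bit more concrete, here is a minimal sketch using the Ruby fog gem (the same library used in my EC2 introduction), with purely hypothetical endpoints, credentials, and tenant names. Fog is one example of a library that hides provider-specific APIs behind a common compute abstraction, so the same downstream code can target a public AWS account or an OpenStack-based private cloud.

require 'fog'

# Connection to a public provider (AWS); credentials are placeholders
aws = Fog::Compute.new(
  :provider              => 'AWS',
  :aws_access_key_id     => 'ACCESS_KEY_ID',
  :aws_secret_access_key => 'SECRET_ACCESS_KEY'
)

# Connection to a private OpenStack cloud; the Keystone URL and tenant are placeholders
openstack = Fog::Compute.new(
  :provider           => 'OpenStack',
  :openstack_auth_url => 'http://keystone.internal.example:5000/v2.0/tokens',
  :openstack_username => 'demo',
  :openstack_api_key  => 'secret',
  :openstack_tenant   => 'demo'
)

# The same model-level code can then run against either cloud
[aws, openstack].each do |cloud|
  cloud.servers.each { |server| puts server.id }
end

Of course, abstraction layers like this tend to expose only the features the providers have in common, which is part of why the gap between hybrid cloud theory and practice remains.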

Read More

Putting the H in OpenStack

Next week the semiannual OpenStack Summit will be held in Portland. Given the significance of this project and its incredible growth over the past few years, I thought I'd write a quick post with some of my thoughts and analysis as a lead-up to the summit.

First and foremost, for those who may not be familiar with OpenStack, it is an open source IaaS cloud computing project distributed under the Apache License. It encompasses all of the capabilities necessary to deploy a fully functional cloud environment. In terms of implementation, OpenStack is composed of a variety of services:

Nova – The core compute service that manages instances
Keystone – An identity authentication and authorization provider as well as a service catalog
Glance – The image service (e.g. AMIs, snapshots, etc.)
Cinder – The block storage service for OpenStack
Swift – The object storage service for OpenStack
Quantum – A networking service which exposes virtual networking abstractions
Horizon – The web UI component

I won't go into detail on these in this post, but if you're interested you can find a nice summary here; there's also a small usage sketch below showing how a few of them fit together.

Next week's summit comes after the recent release of OpenStack G (Grizzly) and will include a design summit component to plan for the next release, OpenStack H (Havana). As you can probably guess, Grizzly was preceded by releases A-F (Austin, Bexar, Cactus, Diablo, Essex, and Folsom), making it the seventh release for the project. I've personally been working with OpenStack since Diablo and am continually impressed with how fast it has been able to grow while maintaining good quality.

Why is it important?

I mentioned earlier that OpenStack is a significant project, at least in my mind. Along with a cohort of open source cloud software including CloudStack (IaaS), CloudFoundry (PaaS), OpenShift (PaaS), etc., OpenStack is helping to enable an open cloud ecosystem that benefits both providers and end users. When talking "open" people typically refer to public availability of source code, but in reality it's a lot more than that. First, OpenStack provides a common framework that allows openness in the form of interoperability for users. This enables scenarios where a company can leverage the same technology across its internal private cloud and a public infrastructure, which is an extremely compelling story. In addition, open includes building a community for contributors and partners to jointly define the future of the project. OpenStack design summits provide a valuable venue for participants to come together outside of mailing lists and IRC channels to meet, debate, and resolve issues, allowing the technologies to be driven forward by a cohesive community.

High-level pitch aside, the real question is how things have been...
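As a quick illustration of how a few of these services interact, here's a minimal sketch using the Ruby fog gem against a hypothetical OpenStack deployment: authentication goes through Keystone, images come from Glance (surfaced here through Nova's image API), and the instance itself is booted by Nova. The endpoint, credentials, and names are placeholders of my own, not a reference configuration.

require 'fog'

# Authenticate against Keystone; its service catalog tells fog where to find Nova
compute = Fog::Compute.new(
  :provider           => 'OpenStack',
  :openstack_auth_url => 'http://controller.example:5000/v2.0/tokens',
  :openstack_username => 'demo',
  :openstack_api_key  => 'secret',
  :openstack_tenant   => 'demo'
)

# List the images Glance knows about
compute.images.each { |image| puts "#{image.id} #{image.name}" }

# Boot an instance through Nova using the first image and flavor found
server = compute.servers.create(
  :name       => 'demo-instance',
  :image_ref  => compute.images.first.id,
  :flavor_ref => compute.flavors.first.id
)
server.wait_for { ready? }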

Read More

An introduction to Amazon EC2

Amazon was one of the pioneers in public cloud and had a basic IaaS offering in the market as early as 2006. Since then they've significantly expanded their service offerings and capabilities. I won't go into all of those in this post (though I will gradually cover them), but instead I'll give a brief introduction to using the IaaS capabilities in Amazon EC2. Think of it as a crash course for absolute beginners with one caveat: instead of a walkthrough using their web-based management UI, I'm going to go through the process programmatically. To make this a bit easier, instead of directly making API calls to EC2 I'll leverage the Ruby fog gem. Fog is a great library that implements many cloud APIs underneath. I'll use it to handle the low-level API implementation, but will point out how it maps to the documented EC2 interfaces.

Some prerequisites/assumptions:

You have a valid Amazon EC2 account set up (easy to create if not)
You have a working Ruby environment on your machine (note fog officially supports 1.9.3, 1.9.2, and 1.8.7)

By the end of this post you should:

Have a basic understanding of how to use the EC2 API with fog
Have a running instance of Ubuntu 12.10 in EC2 you can ssh into

Step 1: Basic setup

First off you'll want to install the fog gem if you don't already have it. That should be as easy as:

$ gem install fog

Next, you'll need to log in to the Amazon account manager to create/retrieve your credentials. To do this you can go here. Find the credentials area (creating a new test set if you wish), and jot down the Access Key ID and Secret Access Key.

Step 2: Fire up fog

For this exercise I'll use fog in Ruby IRB so it's easier to step through and take a look at things as you're going along on your own, but it still reflects what you'd do in a standalone script. You can start IRB and bring in fog as follows:

$ irb
> require 'fog'

Next you'll want to create a connection to EC2 by generating a Compute object with an AWS provider and your credentials from the Amazon portal:

> compute = Fog::Compute.new({:provider => 'aws', :aws_access_key_id => ACCESS_KEY_ID, :aws_secret_access_key => SECRET_ACCESS_KEY})

Step 3: Setup security

Before kicking off an instance, there are two security-related configurations that need to be set up. First, in order to ssh into the instance later, keys will be required. EC2 provides multiple APIs to accomplish this, but here I'll use the CreateKeyPair interface. You can find the full list of available API actions in the EC2 documentation here. In fog, these calls map directly...
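The excerpt cuts off here, but to give a flavor of where it's heading, below is a minimal sketch (my own, not the original post's exact continuation) of what the remaining security setup and instance launch could look like in the same IRB session. The key pair name, security group name, and AMI ID are placeholders; fog's model calls shown here correspond roughly to the CreateKeyPair, CreateSecurityGroup/AuthorizeSecurityGroupIngress, and RunInstances actions.

# Continuing with the 'compute' connection created in Step 2.
# The names and AMI ID below are placeholders, not values from the original post.

# CreateKeyPair: generate a key pair and save the private key for ssh later
key_pair = compute.key_pairs.create(:name => 'fog-demo')
File.open('fog-demo.pem', 'w') { |f| f.write(key_pair.private_key) }

# CreateSecurityGroup / AuthorizeSecurityGroupIngress: allow inbound ssh on port 22
group = compute.security_groups.create(:name => 'fog-demo', :description => 'ssh access')
group.authorize_port_range(22..22)

# RunInstances: boot an instance with the key pair and security group attached
server = compute.servers.create(
  :image_id  => 'ami-XXXXXXXX',   # placeholder for an Ubuntu 12.10 AMI
  :flavor_id => 't1.micro',
  :key_name  => 'fog-demo',
  :groups    => ['fog-demo']
)
server.wait_for { ready? }
puts server.public_ip_address     # address to ssh to using fog-demo.pem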

Read More

Defining the cloud

As with any phrase commonly served with a side of hype, trying to define a term like cloud computing requires balancing promises of technological revolutions with the practical realities of the software incarnations that are borne out in the marketplace. Based on that, in this article I'll take a three-phase approach in trying to articulate the meaning behind cloud computing:

Discuss what were arguably the key industry trends that helped lead to the rise of cloud
Summarize the current-day technological realizations of cloud
Describe the longer-term relevance of cloud relative to the future of computing

While this approach will of course be biased based on my own experiences and opinions, the goal is to provide a first-pass overview of the evolving trends and technologies that are the core definers of what cloud represents while attempting to keep the post a reasonable length. Note, this will be geared toward those who are new to the cloud, and future posts will dig into more depth across a variety of topics covered superficially here.

Cloud defined as a culmination of historical trends

While there were arguably many technological predecessors to what is now termed cloud computing, one way to track its evolution begins, perhaps surprisingly, in the processor semiconductor industry. Moore's Law, which predicts that the number of transistors on an integrated circuit will double every two years, has helped to sustain processor performance improvements that still continue. For many years processor microarchitects strove to improve single-thread performance by combining architectural techniques with improvements in frequency. Inevitably, the cost of optimizing for single-thread performance became prohibitive due to the accompanying power and thermal density issues. The industry adapted to these limitations by moving to a multicore era of processors where performance improvements are realized via parallel execution across a set of cores. While the architectural approaches have evolved over the years, the net result has been a consistent improvement in the performance provisioned within a standard single or dual socket server platform.

In parallel with the trends toward multicore processors and increasing server performance densities, the software ecosystem experienced its own transition in the underlying software used to operate server workloads. Traditionally, a base operating system such as Linux or Windows Server was installed on servers and applications were installed on top of that. However, virtualization technologies that were developed in the late 90s and early 2000s found a foothold in servers as well. The key use case that drove wider adoption was consolidation of applications onto a single platform, which in turn was intertwined with the performance density improvements provided by hardware. The user demand for these capabilities was highlighted by the emergence of multiple commercial and open source hypervisor solutions...

Read More