Tag Archive | "compute"

Facebook’s New Open-Source Data Switch Technology Is Designed For Flexibility And Greater Control

Facebook's new open source top of rack switch broken down by component parts.

Google Bets Big On Docker With App Engine Integration, Open Source Container Management Tool

Google Makes CoreOS A First-Class Citizen On Its Cloud Platform

Facebook building data center in Sweden using new architecture

A rendering of Facebook’s Luleå 2 Rapid Deployment Data Center (RDDC)

If you’re near Luleå, Sweden, you could witness the first Facebook data center being built using a new kind of architecture.

Rapid deployment data center design (RDDC) is a new building concept from the Open Compute Project, an industry-wide coalition of technology companies that creates cost- and energy-efficient designs and shares them for free under an open source model. The concept, which will allow Facebook to expand its capacity twice as fast, was discussed during the Open Compute Summit in January. This will be the second data center building in Luleå, but the first using the new architecture.

This new approach to data center design will enable Facebook to construct and deploy new capacity twice as fast as its previous approach. It will also be much more site-agnostic and reduce the amount of materials used during construction.

The RDDC concept for Facebook began with a hack. In October 2012, Open Compute’s data center strategic engineering and development team and several construction experts came together to hack on a design for a data center that would look less like a construction project and more like a manufactured product. From this hack, a couple of core ideas for streamlining and simplifying the build process emerged.

The first idea developed during the hack was using pre-assembled steel frames. This concept is similar to that of a car on a chassis, where the frame is built and components are attached via an assembly line. In this model, cable trays, power bus ways, containment panels, and even lighting are preinstalled in a factory.

The second idea was Ikea-inspired flat-pack assemblies. Instead of creating a data center where all the weight is carried by the roof, Open Compute sought to develop a concept where the walls of a data center would be paneled to fit into standard modules that would be easily transportable to a site, much like an Ikea bookshelf fits neatly into one box.

Construction on the Luleå data center is expected to begin soon using RDDC designs.

Article courtesy of Inside Facebook

Amazon’s New AppStream Service Lets Mobile Developers Stream Their Games And Apps From The Cloud To Any Device

Amazon today announced a new service for mobile developers at its re:Invent developer conference in Las Vegas. Amazon AppStream, which uses the company’s recently launched g2 EC2 instances, allows developers to easily stream their applications in high definition from the cloud to any mobile device. Amazon is specifically marketing this to mobile developers, but there’s no reason desktop apps couldn’t use this service, too.

The service is now in limited preview and developers can sign up for access here.

This new service, Amazon says, will allow developers to “build high-fidelity, graphically rich applications that run on a wide variety of devices, start instantly, and have access to all of the compute and storage resources of the AWS Cloud.”

Using Amazon STX, a new protocol developed by the company’s engineers, developers can now stream anything from the interactive HD video of complex 3D games to just the computationally intensive parts of their apps right from the cloud. Using Amazon’s g2 instances on EC2, developers can now just render all their graphics in the cloud.

Apps using AppStream can use all of the device’s sensors, too, and then send this data back to the cloud.

This, as Amazon’s VP of Amazon Web Services Andy Jassy noted in his keynote today, means mobile developers now have easy access to resources that wouldn’t otherwise be available on a mobile device. As mobile devices get smaller, he argues, the cloud becomes more important. Many of the most popular apps are already running on top of the cloud (and AWS specifically). This, the company says, means an “application is not constrained by the compute power, storage, or graphical rendering capabilities of the device.”

Article courtesy of TechCrunch

10 Startups In The VMware Universe Worth Tracking This Week At VMworld

The VMworld annual virtualization geek out begins this week in San Francisco. The big topic that will dominate all others: the radical transformation of the data center as a flood of data makes the old IT ways just seem antiquated and ill-fitted to the reality of a new mobile-first world.

A host of startups are emerging that leverage VMware’s dominant position in the enterprise. Here are ten worth tracking this week and the months ahead:

  • CloudPhysics collects and analyzes virtual machine data from data centers to give IT a way to simulate potential problems that they may encounter when introducing new cloud services. The company uses data analytics to also help with the decision-making process, giving customers a way to better choose vendors, avoid costly downtimes and keep in check the ever-increasing human costs that come with IT. Its platform pulls virtual machine data from multiple data centers and then models it for customers to do simulations. For instance, a customer considering flash storage could use the service to simulate how various configurations from different vendors would fit in their static data-center environments.
  • Nutanix plays in the software-defined storage space, a topic that should get a lot of attention this week at VMworld. Enterprise customers have long kept storage separate from their servers. Nutanix takes a different approach: it wraps that storage into commodity x86 servers, helping reduce the space needed for big storage area network (SAN) and network-attached storage (NAS) environments.
  • Cloud Velocity makes the cloud a seamless extension of the data center. Software is installed in the data center and in the cloud, with access to the compute, storage and networking. The technology allows companies to run applications in Amazon Web Services (AWS). In the coming months, the company will expand to other infrastructure environments.
  • HyTrust technology is designed to secure virtualized data centers, which take all the compute, storage and networking and put it into one software layer. Virtualization administrators can manage everything through management platforms, exposing organizations to considerable risk: an administrator can erase an entire data center or copy a virtual machine with relative ease. HyTrust offers a way to mediate access between the administrator and the virtual infrastructure. It offers a role-based system that can help monitor what a person is doing compared to what they should be doing.
  • Tintri provides storage for virtualized data centers. It is one of the next generation flash storage providers that are putting pressure on traditional giants such as EMC and NetApp. The storage is designed specifically for virtualized environments.
  • Vormetric provides enterprise encryption for databases and files across the enterprise. It offers tight access controls to ensure only authorized users and applications can have access to the data.
  • In a post last week, I wrote about AnsibleWorks, which offers an orchestration engine that allows users to avoid writing custom scripts or code to manage applications. The open-source project is designed to open up IT automation to a less technical community. In turn, that also means less reliance on traditional IT, faster delivery and better time spent on important projects. Ansible is different from most IT automation offerings. It does not focus on configuration management, application deployment, or IT process workflow. Instead it provides a single interface to coordinate all major IT automation functions. It also does not use agents nor does it require software to be installed on the devices under management. Puppet Labs and Opscode are two of the more mature startups in the DevOps and IT automation space.
  • Tier3 is a Seattle-based company that provides a service layer giving IT the flexibility of its infrastructure and managed services, making cloud technologies more accessible. In its latest release, Tier3 launched the capability for architects to design network configurations in the public cloud that largely mirror the networking common to internal data centers.
  • Pernix Data provides a Flash Virtualization Platform (FVP). The technology clusters flash to achieve higher levels of performance, similar to how VMware aggregates CPU and memory to give customers more from their server infrastructure. The advantage comes from getting more out of a flash-based server and reducing the need for storage, one of the greatest costs for today’s enterprise customers.
  • PureStorage offers enterprise storage that takes advantage of flash memory. According to the company, its products accelerate random I/O-intensive applications like server virtualization, desktop virtualization (VDI), database (OLTP, rich analytics/OLAP, SQL, NoSQL) and cloud computing.
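AnsibleWorks’ agentless approach described above can be sketched with a minimal playbook. This is an illustrative example only; the host group, modules and paths are hypothetical and not taken from the article:

```yaml
# Hypothetical playbook: configure a group of web servers over plain SSH.
# No agent is installed on the managed hosts; Ansible pushes what it needs.
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Copy application files to the document root
      copy:
        src: ./app/
        dest: /var/www/app/

    - name: Restart nginx to pick up the new files
      service:
        name: nginx
        state: restarted
```

Because the playbook is just declarative YAML run from a single control machine, it serves as the “single interface” coordinating automation tasks without per-device software, which is the contrast with agent-based tools the article draws.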

All of these startups reflect how virtualization and the advancements in software have made it possible to manage data at a granular level.  It’s a wholesale change, reflective of significant overall change in the enterprise.

Update: Please add any startups that I do not have on this list to the comments.

Article courtesy of TechCrunch

Debian Will Serve As The Default OS For Google Compute Engine

Google is bringing Debian to Google Compute Engine and is making it the default OS for developers using the service. Google will support both Debian 6.0 and Debian 7.0, which was released this week.

There are some pretty clear reasons why Google is making Debian the default OS. First of all, it’s free, said Krishnan Subramanian, a cloud analyst and founder of Rishidot Research. “With Ubuntu and Red Hat, Google has to deal with the vendors who want to make money themselves,” he said. Further, Debian has a large customer base. And it fits with Google’s geeky culture.

In its blog post, Google cites improvements in the Debian 7.0 “wheezy” release: hardened security, better 32/64-bit compatibility and changes that address community feedback.

Google states that it will evaluate other operating systems that it can enable with Google Compute Engine.

It’s important to note that Google Compute Engine is only available for subscribers to the $400 Gold Support package.

This all looks like a tune-up for next week’s Google I/O event, where announcements about Google’s cloud computing strategy are expected.

Debian competes with other Linux-based operating systems such as Ubuntu, Mint and Fedora. According to DistroWatch, Debian ranks fifth in page hits. Mint is in the top spot.

Article courtesy of TechCrunch

Google Opens Up Compute Engine To All Developers Who Buy Its $400/Month Gold Support Package, Drops Instance Prices By 4%

At last year’s Google I/O developer conference, Google launched Compute Engine, a cloud computing platform that allows developers to run their apps on Linux virtual machines hosted on Google’s massive infrastructure. This was a limited launch, however, and developers had to either get an invitation or go through Google’s sales teams to get access to this service. Starting today, developers who subscribe to Google’s $400 per month Gold Support package with 24

Open Stack, Open Compute And How Opening Up Is The Only Way To Reach The Data Heavens

“Open” this, “open” that: It seems like everything is “open” these days. Well hello, people, Bill Gates’ gravy boat has run aground. It’s time to get out the Starship and fly in those open clouds.

Hokey? You bet your SaaS this is hokey. But come on, take a look at what happened last week at the Open Compute Summit. The gods have said: “We are really going to mess with your shit now. You’re going to open that hardware up. Otherwise, you will never see the data heavens. Get some soul in those machines. It’s time.”

So who better to talk to than OpenStack Executive Director Jonathan Bryce? OpenStack is the open cloud OS. It’s the software counterpart to the Open Compute Project and its world of open hardware.

Bryce took some time at the Open Compute Summit to lay it all out: OpenStack, Open Compute and how the two relate. The cloud is opening up. Listen to Bryce explain it in the video above. You’ll see that those data rockets won’t be built by a few, proprietary wunderkinds. It’s different now. Borgs working at the mother ship won’t take us to the data heavens — communities will.

Article courtesy of TechCrunch

Open Compute Project: Can Facebook Help Save The World?

Hardware generally doesn’t interest me too much, so when I heard about the Open Compute project I didn’t give it too much attention. Casually reading up on the subject a little more left me even less interested. Why should Facebook have to design their own hardware, I wondered? Wouldn’t hardware vendors be clambering over each other to supply Facebook with gobs and gobs of servers for their data centers?

Amir Michael, Facebook’s hardware lead, discussed the Open Compute project in a keynote presentation at LinuxCon. He laid out the root problem: hardware manufacturers, in an effort to provide differentiation, were actually creating more problems than they were solving. The on-system instrumentation that OEMs provided for Facebook created additional complexity, and ultimately wasted space and raised unnecessary heat concerns.

The HPs and Dells and IBMs of the world had established a very successful business for themselves selling servers with their own customizations, and in smaller quantities those customizations did provide some modicum of benefit to their customers. When you’re buying several hundred servers from a single manufacturer, that manufacturer’s management tools are easy to consume.

But when you’re buying several thousand servers at a time from multiple vendors, the different management tools simply get in the way. The differences between chassis designs and motherboard layouts complicate service issues for the data center staff.

Facebook made the remarkable decision to solve this problem for themselves. They designed their own power supply, which reached 95% efficiency. They designed a vanity-free server case, which provided easier access for technicians and had the unexpected benefit of improved heat dissipation. They went on to design a motherboard with no cruft: just the absolute essentials for the computing requirements. This mainboard was cheaper to produce and also had improved thermal properties. And finally, Facebook redesigned the venerable server rack to make it substantially easier to access, move, and deploy.

An important, but oft-overlooked ancillary benefit to Facebook’s vanity-free and minimalist designs is that they involve less waste, both in the production process and in the disposal process. When you’re buying thousands of servers, this becomes a very important ecological issue. Computer waste is a serious environmental concern, and too many consumers of technology ignore the consequences of disposal.

Recognizing that their data center headaches couldn’t possibly be unique, Facebook shared all of their design specifications, CAD drawings, and reference materials under open licenses through their newly formed Open Compute Project.

The reason for this decision, as Michael said in his presentation, is that “openness always wins.” He pointed to the advent of the USB standard as the perfect illustration of this point. Prior to USB, the PC industry was plagued with finicky peripherals and an abundance of sub-standard interface options. USB, developed openly and in collaboration with multiple interested parties, reshaped the peripheral market into what we enjoy today.

My first question to Michael was “Why didn’t the market solve this problem?” Specifically, why didn’t any of Facebook’s hardware vendors recognize the problem and address it? He pointed out that the bulk of the work began in 2009, when Facebook was considerably smaller than it is today. None of Facebook’s vendors really saw the scale to which Facebook could grow, and as such didn’t see a need to change their products in any meaningful way. The notion of “scale out deployments” hadn’t quite hit the mainstream.

Michael shared with me that all of their internally developed specifications are shared with multiple vendors, and manufacturing proposals are reviewed internally through a democratic process. Each proposal is analyzed according to a number of factors.

When a hardware design is approved for manufacturing, Facebook always uses two vendors for production. The end result is two identical products from two discrete vendors, which provides supply-chain diversity and improved product continuity: both important factors when dealing with production runs at the scale Facebook demands.

Michael pointed out that all of the benefits of scale out development — power, cooling, ease of access — benefit small and medium business consumers just as much as enormous data centers. He also shared that the response to the Open Compute project was unexpected. The reference designs were adopted by participants in a number of different markets and tweaked to provide the kinds of benefits needed in those markets.

Historically, large-scale providers have been cagey about discussing the details of their infrastructure. As a result of the Open Compute project, more and more organizations are growing comfortable talking about the specifics of their data centers. This is slowly resulting in better design and implementation decisions, which will in the long run be better for the environment.

Say what you will about Facebook’s business and marketing decisions, but you can’t deny that they’re doing the world a favor by reducing waste in computer manufacturing designs. The issues involved will only get more important as more and more technology is manufactured. The Open Compute project is a great start. We need more involvement in things like this. We also need to make sure that we’re adequately dealing, as an industry, with the proper disposal of end-of-life hardware.

Article courtesy of TechCrunch
