Please Note: This website has been archived and is no longer maintained.
See the Open Networking Foundation for current OpenFlow-related information.

OpenFlow News

Posts Tagged ‘GENI’

OpenFlow at GEC9

November 7th, 2010, Guido Appenzeller in OpenFlow Blog

At the GENI Engineering Conference 9 last week in Washington, DC, a number of demos were based on OpenFlow, both in the demo session on Tuesday evening and during the plenary on Wednesday morning. In fact, it was exciting to see that the majority of the demos, at least in the plenary session, used OpenFlow as well as the slicing infrastructure that GENI uses to manage its OpenFlow-based networks. In the research community, OpenFlow is on its way to becoming the tool of choice for innovative networking research and large-scale experimentation.


Guru Parulkar on stage at GEC9

For a complete list, have a look at the List of Plenary Demos and the List of Posters on the GPO web site. One of the most amazing demos from the demo floor was Indiana University's GlobalNOC WorldView in 3D. If Minority Report met network management, this is what it would look like.

The Stanford OpenFlow team was present with two demos. The updated load balancing demo used VoIP to let the audience participate via their cell phones. It also ran on a much larger topology of 10 local networks that spanned the continent over the National LambdaRail backbone. It was followed by a new mobile handover demo from Stanford, which showed video streaming from a golf cart driving around the Stanford campus. Stanford's OpenFlow network allowed the mobile client to make simultaneous use of multiple WiFi base stations and the Stanford WiMax deployment, without needing Mobile IP or any similar solution. Demo pages for both demos should be up in the next few weeks, and we'll post here again.

Congratulations to all the teams at GEC9! This was definitely the most impressive set of demos I have ever seen at GEC, and possibly at any networking conference.

OpenFlow Demos at GEC8

July 21st, 2010, Guido Appenzeller in OpenFlow Blog

Today was the demo session at the GENI Engineering Conference in San Diego and the demos included a number of OpenFlow systems.

OpenFlow at GEC8

  • Integrated Control Framework Demo by a joint team of Stanford University and BBN. Using the OMNI command-line tool, a researcher can reserve both PlanetLab compute nodes and an OpenFlow-based networking substrate. The demo used the Expedient aggregate manager for OpenFlow networks as well as the Opt-In Manager. Essentially all of this demo came together over the past 4 weeks thanks to a heroic effort by the Stanford and BBN teams. A wiki page with more information is here.
  • Expedient, a control framework with a graphical UI for OpenFlow-based resources. The version demonstrated can additionally be accessed via the GENI API through a proxy.
  • Aster*x, the OpenFlow-based load balancer. This is the successor to the Plug-n-Serve system, and the demo ran across a number of OpenFlow networks, including Stanford, BBN, Princeton, Indiana and the University of Washington.
  • Transport and Aggregation. This was a combination of the aggregation demo from SIGCOMM 2009 and the optical transport integration done together with Ciena. Details here.
  • WiMax. A demo from the OpenRoads team done together with two other WiMax demos at the conference.
  • Clemson University showed their graphical UI for configuring the slices on their local OpenFlow deployment. The UI looked great and there are a number of similarities with the Expedient UI.

Thanks to the 20+ people involved in putting these demos together; they were a big success. A few pictures are below, with more in the photo gallery of the demo session.

GENI announces $10.5m in NSF funding for large-scale prototypes

October 26th, 2009, Guido Appenzeller in OpenFlow Blog

BBN Technologies today announced $10.5 million in NSF funding for large-scale prototype deployments of new networking technologies (Full Press Release). It is exciting to see a first generation of GENI research move from the laboratory to live networks across the continent.

Negotiations on scope and amounts for the individual projects are ongoing, and nothing is final yet. That being said, the current plans are for a substantial part of the funding to be used for OpenFlow deployments at a number of universities and backbone networks. Schools previously mentioned as participating include Princeton, Rutgers, Clemson, Wisconsin, Indiana, Georgia Tech and the University of Washington, with NLR and Internet2 connecting them. A number of networking hardware vendors have committed to providing OpenFlow-enabled switches and routers for the deployments. We'll update you on the details as they are announced.

In the meantime, congratulations and thanks to Chip Elliott (pictured to the right) and his team at the GPO for taking another major step to move the GENI vision forward.

Enterprise GENI featured

October 5th, 2009, Guido Appenzeller in OpenFlow Blog

Enterprise GENI, the OpenFlow-based network substrate that is part of the large-scale, NSF-funded GENI effort, is featured on the GENI home page today. GENI uses the FlowVisor with an add-on Aggregate Manager to virtualize a network. Recently at Stanford we demonstrated how to use eGENI together with PlanetLab, allowing control of both computing and network infrastructure through a single framework. For more information, have a look at the article.
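The idea behind FlowVisor-style virtualization is that each experiment owns a "slice" of the header space, and control messages are dispatched to the controller of whichever slice the traffic falls into. The sketch below is a toy illustration of that dispatch step, not FlowVisor's actual API; the class names, rule fields and slice names are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowSpaceRule:
    """One slice's claim on a region of header space (None = wildcard)."""
    slice_name: str
    vlan_id: Optional[int] = None
    ip_dst_prefix: Optional[str] = None  # e.g. "10.0.1."

    def matches(self, pkt: dict) -> bool:
        if self.vlan_id is not None and pkt.get("vlan_id") != self.vlan_id:
            return False
        if self.ip_dst_prefix is not None and \
                not str(pkt.get("ip_dst", "")).startswith(self.ip_dst_prefix):
            return False
        return True

class FlowVisorSketch:
    """Dispatches packet-in events to the slice whose flowspace matches."""

    def __init__(self):
        self.rules = []  # ordered list of FlowSpaceRule; first match wins

    def add_slice_rule(self, rule: FlowSpaceRule) -> None:
        self.rules.append(rule)

    def dispatch(self, pkt: dict) -> str:
        """Return the name of the slice that should handle this packet-in."""
        for rule in self.rules:
            if rule.matches(pkt):
                return rule.slice_name
        return "production"  # unmatched traffic stays with the default slice
```

A real FlowVisor additionally rewrites flow-table messages in the other direction, so that each slice's controller can only install rules inside its own flowspace; this sketch only shows the upward (packet-in) dispatch.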

GEC3 Demo Photos and Slides

November 17th, 2008, Guido Appenzeller in OpenFlow Blog

Two weeks ago we had a great demo of OpenFlow at the third GENI Engineering Conference. Things we demonstrated included:

  • A centrally controlled OpenFlow network with OpenFlow switches deployed at Stanford, Internet2 and JGN2plus in Japan.
  • Virtual machine mobility at Stanford. You can see this in detail in the SIGCOMM Demo Video.
  • Flow Dragging. David Underhill created a fantastic UI that allows you to change the path packets take in the network by dragging the flow with the mouse to new routers; an example video is shown below.
  • Virtual machine mobility within JGN2plus and between Stanford and JGN2plus. A running virtual machine was migrated across the Pacific while hosts in Japan were communicating with it. The combination of OpenFlow and our controller allowed the virtual machine to change locations and maintain connectivity without changing IP address.
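The mechanism behind the transpacific migration can be sketched roughly as follows. This is an illustrative toy, not the actual Stanford controller code: the controller tracks which switch port each VM's IP is attached to, and when the VM reappears elsewhere it tears down the stale flow entries and installs new ones, so traffic follows the VM while its IP address stays unchanged.

```python
class MobilityControllerSketch:
    """Toy controller logic for maintaining connectivity across VM migration.

    All names and the (switch, port) model are assumptions for illustration.
    """

    def __init__(self):
        self.location = {}   # ip -> (switch, port) where the VM is attached
        self.installed = []  # log of (action, switch, ip, port) flow-mod events

    def host_seen(self, ip, switch, port):
        """Called when traffic from `ip` appears on (switch, port)."""
        old = self.location.get(ip)
        if old == (switch, port):
            return  # no move, nothing to do
        if old is not None:
            # The VM migrated: delete stale entries steering traffic to the
            # old attachment point.
            self.installed.append(("delete", old[0], ip, old[1]))
        self.location[ip] = (switch, port)
        # Install a fresh entry forwarding traffic for this IP to the new port.
        self.installed.append(("install", switch, ip, port))
```

Because rerouting happens entirely in the flow tables, the VM's peers keep talking to the same IP address throughout the move, which is why no Mobile IP-style indirection was needed in the demo.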

The demonstration OpenFlow network incorporated switches from (in alphabetical order) Cisco, HP, Juniper and NEC.

The slides for Nick’s talk before the demo are online here.

Thanks to Glen who was the technical lead on this demo, as well as to everyone else on the 30 person team from Stanford, HP, NEC, Internet2, Cisco and Juniper who made this a success.

Photo Gallery from the GEC Demo after the jump…

Updated: OpenFlow in Computerworld

October 30th, 2008, Guido Appenzeller in OpenFlow Blog

Tim Greene from Computerworld has a very nice article about OpenFlow, vendors that have implemented it and the demo at the GENI Engineering Conference. It is also up on Networkworld.

The GENI demo took place just a few minutes ago, and it is safe to say it was a huge success. We demonstrated both virtual machine mobility and arbitrary flow routing. More exciting updates on OpenFlow are coming soon.

Update: International coverage of OpenFlow in Japanese, Portuguese, Italian, Spanish, Polish and Swedish after the jump.


OpenFlow demo at the GENI CIO Meeting in Chicago

August 27th, 2008, Guido Appenzeller in OpenFlow Blog

We gave a short presentation of OpenFlow in the context of Enterprise GENI at the GENI CIO Meeting in Chicago today and finished it with a live demo of the system running at Stanford. Everything worked very well: the dashboard ran in Chicago, and we were able to demonstrate VM mobility and flow optimization (thanks, Glen and David U!).

There seems to be a lot of interest from university CIOs in OpenFlow as a potential tool for networking research. In particular, the ability to run production traffic and experimental traffic on the same switching hardware with good separation drew many questions. There is a natural tension at universities between networking researchers, who want maximum flexibility, and the people operating the production network, who want to keep everything stable and secure. An OpenFlow switch that separates OpenFlow and production traffic by VLAN seems to provide at least a partial solution to this problem.
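The VLAN-based separation just described amounts to a simple per-frame classification on the switch. The sketch below illustrates that decision only; the VLAN numbers and function names are assumptions for the example, not any vendor's actual configuration.

```python
# Frames on designated experiment VLANs enter the OpenFlow pipeline (and may
# be sent to the controller); all other VLANs take the switch's conventional
# production forwarding path. VLAN IDs here are illustrative assumptions.

EXPERIMENT_VLANS = {100, 101}  # VLANs handed over to OpenFlow

def classify_frame(vlan_id: int) -> str:
    """Decide which pipeline handles a frame, based only on its VLAN tag."""
    if vlan_id in EXPERIMENT_VLANS:
        return "openflow"    # processed by flow tables / the controller
    return "production"      # forwarded by the switch's normal L2/L3 logic
```

Because the decision depends only on the VLAN tag, a misbehaving experiment is confined to its own VLANs and cannot disturb production traffic, which is exactly the separation property the CIOs were asking about.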