Monthly Archives: April 2012

Software Defined Networking: A First Look at OpenFlow

Software Defined Networking is an attempt at providing a programmable network protocol that can be used to virtualize entire network infrastructures. By virtualizing your network infrastructure, OpenFlow may someday allow us to overcome vendor compatibility issues surrounding how routing protocols are implemented, and to manage Layer 2 and Layer 3 network constructs such as SONET and IP using a single protocol, without worrying about Layer 3 IP addresses or Layer 2 virtual circuit numbers. At Layer 3, OpenFlow looks like a protocol for implementing distributed policy-based routing; at Layer 2, I see no current equivalent tool or technology.
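The programmable model behind OpenFlow can be pictured as a match/action table that a controller populates. Here is a minimal sketch in Python; the field names and actions are illustrative only, not the actual OpenFlow wire format:

```python
# Toy flow table illustrating OpenFlow's match/action model.
# Field names and actions are illustrative, not the real protocol.

flow_table = [
    # (match criteria, action) -- entries are checked in order
    ({"in_port": 1, "dst_ip": "10.0.0.5"}, "forward:2"),
    ({"vlan": 100},                        "forward:3"),
    ({},                                   "send_to_controller"),  # table-miss entry
]

def lookup(packet: dict) -> str:
    """Return the action of the first flow entry the packet matches."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(lookup({"in_port": 1, "dst_ip": "10.0.0.5"}))  # forward:2
print(lookup({"in_port": 4}))                        # send_to_controller
```

The key idea is the last entry: packets that match nothing are punted to the central controller, which can then install a new flow entry, so forwarding policy lives in software rather than in each box's routing protocol implementation.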

SDN advantages as I see them (note: my knowledge of SDN is nascent):

1: Easier management of end-to-end infrastructure elements at Layer 2 and Layer 3 (reduced opex?)

2: Network equipment prices should fall, since SDN/OpenFlow relies on a central controller to push policies to devices (capex reduction).

3: Enhanced traffic engineering at Layer 2 without the need for spanning tree.

4: Engineers can acquire protocol knowledge instead of studying vendor-specific equipment commands.

5: Being an open standard, there should be fewer RFC interpretation discrepancies between vendors (yes, OSPF is implemented differently by vendors trying to enhance or lock out the competition).

Questions for the Experts:

1: While programmability is an advantage of Software Defined Networking, isn't it also a way of adding complexity to the current networking paradigm? Programming is viewed as complex: imagine creating macros for all our current tasks; it would reduce future work but would be difficult and time-consuming upfront.

2: The SDN controller will control traffic flows by adding and removing entries from flow tables embedded in our switches and routers. Will using a central controller add latency to policy execution in large infrastructures?

3: Will OpenFlow replace current Layer 2 and Layer 3 protocols such as MPLS, BGP and OSPF?


Posted by on April 29, 2012 in Technology



Will Type I Hypervisors Replace Current Server Operating Systems?

A Type I hypervisor essentially encapsulates our guest operating systems as just another application. The fact that a guest OS is executed through another layer introduces some latency that is impermissible in certain use cases, for example heavily utilized transactional databases. However, with advances in CPU microarchitectures and the inevitable fall in the price of solid-state storage, today's execution and I/O latencies can be greatly reduced, making more use cases virtualization friendly.

Taking a 30,000-foot view of future OS architectures, I see a Type I hypervisor such as XenServer or vSphere becoming the physical server's operating system, while current server operating systems evolve into lightweight, hypervisor-aware execution containers playing a role similar to the present-day Java Virtual Machine. Future apps written for Windows would be presented by the execution shell's presentation layer (such as WPF), while access to hardware devices is transparently handled by the hypervisor via the execution container's API functions, implemented as interfaces that request hardware-related services from the hypervisor.

Current server OSes might evolve into hypervisor-aware apps. This means they would be fully aware of their encapsulation within a hypervisor and built to make calls directly to their hosting hypervisor instead of sending commands to virtual devices. Think of how Windows uses direct memory access today, except that all hardware-related calls would be sent directly to the hypervisor / primary operating system.
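The "execution container" idea above can be sketched as an interface the guest programs against, with the hypervisor supplying the only concrete implementation. This is a minimal illustration of the design, not any real hypervisor API; all class and method names are hypothetical:

```python
# Sketch of a hypervisor-aware guest container: the guest calls an
# abstract hardware-services interface instead of emulated devices,
# and the hypervisor (the primary OS) implements it.

from abc import ABC, abstractmethod

class HardwareServices(ABC):
    """Interface a guest container uses instead of virtual devices."""
    @abstractmethod
    def read_block(self, device: str, lba: int, count: int) -> bytes: ...

class Hypervisor(HardwareServices):
    """The primary OS: the only layer that touches real hardware."""
    def read_block(self, device: str, lba: int, count: int) -> bytes:
        # Real driver work would happen here; stubbed for illustration.
        return b"\x00" * 512 * count

class GuestContainer:
    """A hypervisor-aware guest: no device emulation, direct calls."""
    def __init__(self, hw: HardwareServices):
        self.hw = hw  # handed to the guest by the hypervisor at start-up

    def load_boot_block(self) -> bytes:
        return self.hw.read_block("disk0", lba=0, count=1)
```

The design choice mirrors the post's argument: the guest never pretends it owns hardware, so the expensive trap-and-emulate layer disappears and the hypervisor remains the single arbiter of physical devices.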


Posted by on April 27, 2012 in Technology



Govnet Should Be Our Precursor to a J-Cloud


The Jamaican Government has plans to launch one huge network, called Govnet, that spans all major government agencies and ministries. The economies of scale Govnet can create are significant, but our government should consider taking things a step further. All agencies of government are responsible for purchasing software and hardware to meet their respective needs, and one of the main benefits of virtualization, and by extension multi-tenant cloud infrastructure, is the optimization of hardware resource utilization.

The cost of maintaining these various IT infrastructures is significant, so it would be wise to build, on top of the E-Learning Jamaica physical network, an MPLS-based Govnet that interconnects all government agencies. Govnet could then serve as the highway carrying information from the government's cloud, call it J-Cloud or any suitable identifier. A J-Cloud could provide desktops as a service, email, unified communications, and host applications peculiar to each agency. The infrastructure could be deployed using FlexPods or Vblocks, available from vendors such as Cisco, NetApp and VMware.

FlexPods and Vblocks are integrated, vendor-certified solutions consisting of storage, hypervisors and network equipment used to deploy cloud infrastructures. They save the customer from having to build a cloud in piecemeal fashion using equipment and software that is not certified to work together.

The benefits of a J-Cloud are:

1: Optimization of IT hardware utilization

2: Reduced licensing cost, since all agencies can potentially access one set of licenses

3: Effective collaboration and access to data via cloud-hosted virtual desktops, which can be accessed on many mobile devices

4: Increased access to applications by all agencies

5: Ability to scale up by adding infrastructure components as needed

It would also pave the way for future government-wide infrastructure projects, such as Internet telephony encompassing all agencies.


Posted by on April 23, 2012 in Technology



The Cisco Supervisor 2T: Long Live the King


For those of us who manage networks with Cisco 6500 series switches in the core, I am sure the wide array of high-bandwidth switches from Cisco and its competitors has caught your attention. The 6500 series provides 10 Gigabit Ethernet performance at 80 Gbps per slot when coupled with Supervisor 2T modules. The question you should be asking is: how much throughput do I need in the future? If your bandwidth needs are growing exponentially and you want data center traffic to traverse your core switches, then upgrading your 6500 might not be such a good idea, since a data center's aggregated bandwidth demands can be significant in a medium to large organization.

However, the drawback to acquiring new switches is their price; a supervisor upgrade would be much cheaper than acquiring a similarly sized switch. Size alone does not truly reflect the state of affairs either, since there are now 2U switches that can outperform an upgraded 6509; examples can be found in the Nexus 5000 series switching line. If you run separate data center and user networks (as you should), then a 6509 with a Supervisor 2T module is a relatively inexpensive upgrade that causes only minor disruption and fewer headaches, at roughly three times your current performance. Replacing your switch with a smaller unit will always pose re-cabling and re-arrangement challenges in your core network, which is daunting given the differences in the number of switch ports. Sometimes we need to stay put until we have a clear need for change; if you are not oversubscribing your current infrastructure but want to future-proof it, the Supervisor 2T provides reasonable investment protection.
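The "am I oversubscribing?" question above is just arithmetic: compare a line card's aggregate port bandwidth to the per-slot fabric capacity. A back-of-envelope sketch, using the 80 Gbps/slot figure from the post and an assumed 16-port 10 GbE card (check your actual card and supervisor data sheets):

```python
# Back-of-envelope oversubscription check for a chassis line card.
# Figures below are illustrative assumptions, not vendor data.

def oversubscription(ports: int, port_gbps: float, slot_gbps: float) -> float:
    """Ratio of aggregate port bandwidth to per-slot fabric capacity."""
    return (ports * port_gbps) / slot_gbps

# A 16-port 10 GbE card on an 80 Gbps/slot fabric (Supervisor 2T):
ratio = oversubscription(ports=16, port_gbps=10, slot_gbps=80)
print(f"{ratio:.1f}:1 oversubscribed")  # 2.0:1

# An 8-port card on the same fabric runs at line rate:
print(oversubscription(ports=8, port_gbps=10, slot_gbps=80))  # 1.0
```

If the ratio stays at or below 1:1 for your busiest slots, the supervisor upgrade path keeps you at line rate and the investment-protection argument holds.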


Posted by on April 21, 2012 in Technology



Managing Virtual Desktop Boot Storms

When designing a virtual desktop solution, IOPS is king. The rate at which data can be written to and read from central storage is usually the main factor determining the acceptability of a Virtual Desktop Infrastructure (VDI) solution. Usually, large numbers of fast hard disks are used to provide the IOPS needed to serve data to our virtual machines, but eventually even the best-designed systems struggle when confronted with boot storms.

A boot storm occurs when many users power on their virtual desktops within a short time period. The IOPS required to load operating system and application files at boot usually surpasses the amount needed for daily tasks such as word processing, and the entire system may grind to a halt due to inadequate storage performance.

Now, how do we solve this issue? We could throw more spindles (hard disks) at the problem, which results in a lot of wasted storage capacity, or we could use solid-state drives to store the files the virtual desktops require at boot. A solid-state drive, though expensive, can help absorb boot storms, since SSDs are typically 25-30 times faster than the fastest hard disk. Since solid-state drives are not cheap, you may also look at storage area networks that can cache frequently requested blocks of data; these systems can likewise improve boot-time performance for your VDI setup and help ensure end-user acceptance of the solution.
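To see why boot storms dominate the sizing exercise, it helps to run the numbers. This rough sketch assumes illustrative per-desktop and per-device IOPS figures (they vary widely in practice; measure your own workload):

```python
# Rough VDI sizing sketch: spindles needed at steady state vs. during
# a boot storm. All IOPS figures are illustrative assumptions.

import math

DESKTOPS = 500
STEADY_IOPS = 10     # per-desktop IOPS during normal work (assumed)
BOOT_IOPS = 100      # per-desktop IOPS while the OS boots (assumed)
DISK_IOPS = 180      # one fast 15k RPM SAS spindle (assumed)
SSD_IOPS = 5000      # one solid-state drive (assumed)

steady_disks = math.ceil(DESKTOPS * STEADY_IOPS / DISK_IOPS)
boot_disks = math.ceil(DESKTOPS * BOOT_IOPS / DISK_IOPS)
boot_ssds = math.ceil(DESKTOPS * BOOT_IOPS / SSD_IOPS)

print(steady_disks)  # 28  spindles cover the working day
print(boot_disks)    # 278 spindles to survive a simultaneous boot
print(boot_ssds)     # 10  SSDs absorb the same storm
```

The tenfold gap between steady-state and boot demand is exactly the wasted capacity problem: buying 278 spindles to cover a ten-minute storm is why a small SSD tier or a caching array is the more economical fix.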



Posted by on April 19, 2012 in Technology



Enhancing Disaster Recovery Using Virtualization

The ability to convert physical computers and servers to virtual machines is an often understated benefit of server virtualization. Virtual servers are essentially flat files stored in a proprietary format, and these files can be created periodically and stored on backup media for disaster recovery purposes. Businesses that have strict recovery times and need greater availability may virtualize their physical servers and host the resulting virtual machines on physical servers located offsite. The physical servers at the company's main office and the virtual servers offsite could be configured to sync with each other while clustered in a master and slave setup, with the virtual machines serving as slaves backing up their physical masters. If a physical server goes down, a virtual server can take over processing in a few seconds. Tools such as Double-Take can be used to synchronize the data residing in the disparate locations, while a WAN circuit links the two sites.
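The master/slave takeover described above usually hinges on a heartbeat: the offsite VM promotes itself only when the physical master goes silent. A minimal sketch of that logic, assuming a heartbeat message arrives over the WAN link; the class name and timeout are illustrative, not any particular replication product:

```python
# Minimal heartbeat-based failover sketch: the offsite slave VM stays
# passive while the physical master sends heartbeats, and promotes
# itself once the master goes silent. Threshold is illustrative.

import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before failover (assumed)

class SlaveNode:
    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()
        self.active = False  # slave starts passive

    def on_heartbeat(self) -> None:
        """Called whenever a heartbeat arrives from the master."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> bool:
        """Promote this node if the master has gone silent too long."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = True
        return self.active
```

Real products add fencing and split-brain protection on top of this, since a cut WAN circuit looks identical to a dead master from the slave's point of view.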


Posted by on April 12, 2012 in Technology




Lowering Branch Office Connectivity Costs

The days of businesses using expensive T1 lines and leased circuits are coming to an end. MPLS and Metro Ethernet WAN solutions have become the main WAN technologies employed by most medium to large organizations with multiple branches, such as banks.

While the advantages of these new high-bandwidth WAN solutions are many, most small businesses, especially those in Jamaica, may not be able to foot the cost of even the most basic MPLS or Metro Ethernet WAN service.

A small business owner need not worry about building his own wide area network from these technologies, though. Owners who cannot finance these new WANs can instead leverage Virtual Private Network (VPN) technologies and low-cost, high-capacity residential Internet packages to create wide area networks based on site-to-site VPNs.

Three small stores, each with an entry-level router, a static IP address and a small business-class broadband service, can easily construct their own small WAN to carry business, email and IP video data between branches. While not as scalable as the solutions sold by service providers, this approach delivers wide area connectivity at a fraction of the cost.
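One planning detail worth checking before going this route: the number of site-to-site tunnels you must configure grows quickly with branch count. A quick sketch of the two usual topologies (the three-store example from above is the test case):

```python
# Tunnel counts for a small site-to-site VPN WAN.
# Full mesh: every pair of branches gets a direct tunnel.
# Hub-and-spoke: every branch tunnels only to a central site.

def full_mesh_tunnels(n: int) -> int:
    """n branches, one tunnel per pair: n * (n - 1) / 2."""
    return n * (n - 1) // 2

def hub_and_spoke_tunnels(n: int) -> int:
    """n branches, one tunnel from each spoke to the hub."""
    return n - 1

print(full_mesh_tunnels(3), hub_and_spoke_tunnels(3))    # 3 2
print(full_mesh_tunnels(10), hub_and_spoke_tunnels(10))  # 45 9
```

At three stores the difference is trivial, but at ten branches a full mesh means 45 tunnels to configure and monitor on entry-level routers, which is exactly the scalability limit the service-provider solutions are selling their way around.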


Posted by on April 10, 2012 in Technology

