And now for something completely different…

It's been a few weeks since I added to my blog, and I swear there is a good reason for that. December and January have been very interesting months for me, culminating in the biggest career move I have made to date.

When I started working in the public sector a few years ago, I made the decision to leave my life in the private sector and join government with a clear purpose. I wanted to apply my abilities and knowledge to make a positive impact on the lives of my fellow citizens. I wanted the ability, and the authority, to leverage technology to positively change the way the government does its business: improving services and processes, saving costs, and providing new channels and services to citizens. I was fortunate to be offered a position with the District of Columbia OCFO as the Deputy CIO, a role with the authority and visibility to do just that. Still, I deliberated about the decision to join the DC OCFO very thoroughly and carefully. I validated my assumptions and aspirations. My analysis showed me that it was the right choice to make, and in retrospect, it has been one of the best decisions I have ever made in my career and in my adult life.

Working at DC over the last few years has been one of the most challenging, and most rewarding, experiences of my life. I have had the opportunity to work alongside some of the most remarkable public servants. I have seen my colleagues and peers work hard and apply their passion and dedication to make a difference in the lives of the taxpayers of the District every day. I also learned how difficult it is to bring about fundamental change in the business of government. I experienced the challenges related to technology, policy, process, and politics involved in the day-to-day operations of government. I also learned that despite the challenges and difficulties, there are many hard-working and motivated public servants who try their best every day anyway, in the face of apparently insurmountable odds; and that if you are courageous enough to challenge the status quo, you will find allies in unlikely places.

I also got a chance to geek out in some really cool ways on a shoestring budget. I helped introduce many modern technology paradigms and platforms within my organization, including:

  1. A fundamental security re-architecture of the network, improving performance and security while reducing costs.
  2. Relocation of over 1,200 employees to a new facility, saving significant costs for the government.
  3. Enabling end users to work from anywhere using mobile technologies and profiles.
  4. Introduction of mobile platforms and diverse endpoints, including iPhones, iPads, and Android devices.
  5. Implementation and modernization of several legacy back-end systems, including the District’s core financial system.
  6. Implementation of a data management and business intelligence platform to act as the engine driving many transparency, data sharing, dashboard, and performance management initiatives.
  7. Implementation of a first-of-its-kind budget transparency dashboard for DC.
  8. Rollout of our first iPhone app, allowing citizens and policy makers to access the District’s budget and spending data in the palm of their hands.
  9. Greening of our data center, leading to up to 70% of our infrastructure running on a tightly managed virtual platform.
  10. Implementation of the first large-scale cloud-based ERP initiative, moving the new core financial system to operate in the cloud.
  11. Implementation of a digital workflow platform, eliminating dozens of archaic paper-based business processes.
  12. Building a team of rockstars who spend long days, nights, and weekends giving their best to the service of the District’s taxpayers.

I am in no way taking credit for all of these initiatives myself, but I did play a role in making them successful, alongside my team. All of this great work continues.

While all this excitement was going on, I saw a tweet from @caseycoleman (GSA CIO) in June 2010 linking to an exciting vacancy announcement for GSA’s Deputy CIO/CTO. I have long admired the great work Casey and her team have been doing at GSA. GSA has been at the forefront of some amazing things, from the first cloud email and collaboration implementation within the Federal government, to establishing and using many new technology and new media initiatives, to setting the stage for all of the Federal government in how to procure technology more intelligently, as well as being the agency carrying the flag on sustainability and green government. Under the leadership of Administrator Martha Johnson and Casey Coleman, the GSA IT community is setting the stage for many great things to come in the public sector. As you can imagine, I was immediately intrigued and interested. So I decided to put my name in the hat, which no doubt contained many other qualified candidates’ names.

After a long process, the details of which I will spare everyone, and an opportunity to meet and have frank dialogue with many of GSA’s key stakeholders, I am extremely humbled to have been selected for this role. I started in this new role last week. As with a similar decision several years ago, I deliberated this one very carefully. The decision to leave the service of the District was not an easy one. It has been difficult to move on from the company of so many amazing colleagues and the opportunity to serve the community that I love. But in the end, I was convinced that joining GSA was the right decision, in light of the opportunity to work as part of such a fantastic team, under such great leadership, and to be part of many exciting and challenging initiatives.

As I hope you can imagine, the last couple of weeks have been a whirlwind of transition activities. As I transition into my new role at GSA, I will be working hard to make sure that I can add value to a team of extraordinary professionals. I also hope to return to my personal blog more often as things get settled over time.

I wanted to say thanks to all who have wished me well through this transition. I appreciate your kind words and will continue to push the agenda for Govies and Geeks everywhere.



Security is more than firewall ACLs

I am en route to see family via train, sitting in the cafe car eating a hot dog, when the gentleman sitting behind me picks up the phone to call (presumably) his aunt or mom. After pleasantries and holiday wishes, the conversation turns to the peculiar topic of voicemail security and passwords. In the span of about 3 minutes, the guy first describes how the listener should set a 4-digit code for their voicemail so “no one can hack into your voicemail,” and then proceeds to relay his own example by providing every bit of personal information to a car full of passengers (“see, my birth date is August xx, 19xx and my last name is xxxxxxxx, so my voicemail password is xxxxxxxxx, but my computer password at work is xxxxxxxxx. My bank didn’t like that password so I added my birth year to the end…”). I had to physically restrain myself from turning around and bonking the guy on the head.
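As an aside, the damage from that pattern is easy to quantify. Here is a rough back-of-the-envelope sketch (the dictionary size and year count are my own illustrative assumptions, not measurements) comparing the search space of a “common word plus birth year” password against a random one of similar length:

```python
import math

# Assumed attacker model: try every common word with every plausible
# birth year appended. Both counts below are illustrative assumptions.
dictionary_words = 100_000   # size of a common-word dictionary
candidate_years = 100        # plausible birth years to append

patterned_space = dictionary_words * candidate_years  # 10 million guesses

# Compare with 12 truly random printable-ASCII characters (95 symbols).
random_space = 95 ** 12

print(f"word+year pattern: ~2^{math.log2(patterned_space):.0f} guesses")
print(f"random 12 chars:   ~2^{math.log2(random_space):.0f} guesses")
```

At even a modest guessing rate, the patterned space is exhausted almost instantly, while the random one is out of reach; announcing the pattern on a train shrinks it further still.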

First, I started thinking about how clueless the average person is about information security. Quickly, though, I caught myself and realized how we, the geek/IT community, have really failed the broader user community over the last couple of decades when it comes to IT security.

First, in the interest of self-preservation, we made IT and INFOSEC into a mysterious giant living on the hillside that only the blessed few can understand. Our general message has been: “Don’t worry your pretty little head about INFOSEC and ITSM. We have a Nimitz-class, double-redundant, self-healing, situationally aware border firewall coupled with artificial-intelligence-based IDS/IPS and NAC infrastructure. We have degrees and certifications you cannot even spell. Just give us more money, we got this.”

Then, when someone does get infected or breached, our disdain for their inability to understand INFOSEC is palpable. “You clicked on WHAT?! Don’t you know you aren’t supposed to click on links embedded in emails that appear to come from your beloved aunt? You silly, silly man! Now YOU have brought the entire corporate network down. I hope you are proud.”

Then, when convinced that educating end users is important, we set up some pretentious security program that generally says some variation of “don’t be stupid and you will be fine,” leaving many non-geeks to say, “but I have no idea what not being stupid means.”

I’ve seen end users in my government agency so petrified that they refuse to open legitimate emails, refuse to click on links to properly authorized internal sites, collaboration portals, and documents, and even mark these emails as spam (resulting in not getting any more emails from the boss, or a colleague, or the help desk… maybe not all that troublesome after all!). And then they say things like, “oh, I get so much spam that I routinely miss important information.”

Fellow geeks, it is in our own self-interest to educate non-techie end users and colleagues about real, practical information security and management in a non-hostile manner. Help them understand concepts in layman’s terms. Help them see the parallels between information security and physical security (you wouldn’t give a stranger the keys to your house, would you? Well, your password is like the key to your computer system). Build a partnership that allows users to ask frank questions rather than cowering in fear. Let’s try to turn this tide in the new year.

Digital Divide is a poor choice of words

I have been thinking a lot about the issues and cultural influences contributing to the digital divide in America.  I have mentioned certain aspects and manifestations of the Digital Divide in my previous blog posts here, here, and here.

One of the prevalent misunderstandings among policy makers regarding Digital Divide issues is that it is somehow a technology problem to solve.  I think it’s the word “Digital” that throws people off.  It’s no surprise that this is one of the issues usually handed off to the city/county/state CIO to go away and figure out.  A couple of touching speeches, maybe a strategy document, and a few memos later, the problem gets put on the back burner until the next election cycle.

The reason for this indifference oftentimes isn’t that policy makers and government leaders don’t care.  It’s that Digital Divide issues are inherently very hard to “solve”.  In fact, shooting for “solve” may be absolutely the wrong thing to do, much in the same way that “solving” social security or immigration is quasi-impossible.  What we should do instead is identify and work towards incremental improvements and achievable milestones through a combination of policy, technology, outreach, education, opportunity, and competition.  Achievable milestones and planning horizons of 12-18 months are a better strategy to actually get something done in the right direction.

What we need to understand very clearly is that Digital Divide issues are often not inherently technology issues.  That’s why it may be best to start calling this group of issues by a different name.  Sure, technology, or rather access to and the ability to use technology, is a major component of the “problem statement”.  However, that is just defining the symptoms.  The root causes often do not lie in a “lack of access to technology”.  While access to technology (computing resources, broadband access, etc.) is an important contributor, the underlying causes are usually more deeply rooted in economic inequality, education, language barriers, lack of training/learning opportunities, and the inability of our education system to make technology a cornerstone of individual capability and knowledge.

The CTO can do a lot to help, but cannot lead the charge.  City-level CTOs can certainly move forward in the “access” area by leveraging existing city resources to provide high-speed access to underserved populations in the city.  The District of Columbia’s DC Net, a part of the Office of the Chief Technology Officer, is doing a lot in this area by extending the city’s existing fiber optic network to serve underserved areas within the city, leveraging ARRA grants provided by the Federal government.  It’s definitely a step forward, although several issues remain unresolved, including typical last-mile issues and the policy/operational issues inherent in the city becoming the de facto ISP for hundreds or thousands of households within the District.  Moreover, while the District has had some success in this area, other cities or localities may not be as fortunate due to a lack of resources, established platforms, or geographical challenges.  DC Net is also providing high-speed wireless access via hundreds of hot spots across the city, including the National Mall.

However, despite these well intentioned efforts, many other stakeholders need to be involved to address the root causes of Digital Divide issues.


In my analysis, the following stakeholder groups need to gear up and get involved:

  1. Policy Makers/Elected Officials: Elected officials and policy makers need to clearly understand Digital Divide issues and make them a priority, establish policies that address the root causes, and establish funding and programs that help with the root cause issues.  Some examples, such as the recently introduced legislation mandating GSA to provide high-speed WiFi access within all GSA-managed properties, are positive steps in this direction, potentially serving surrounding communities, but policy makers also need to address the difficult aspects, including education, training, opportunity, etc.  This can be done in a variety of ways, including providing direct (grants) or indirect (tax subsidies) incentives for employers to provide technology training to employees, or for non-profits to provide technology assistance and training to underserved communities.
  2. Non-profit and for-profit higher education institutions: There is a huge opportunity for community colleges, for-profit colleges (Phoenix, Strayer, etc.), and vocational/adult education schools to offer technology assistance and education programs to underserved communities, and to provide access to computer resources when those resources are not otherwise being utilized.  This can also be done by offering volunteer opportunities for advanced college students/graduate students, lab technicians, and faculty.
  3. Public Education Systems: Public education systems must analyze and modify their curricula to integrate and weave technology education into the education program starting at the primary school level.  More than providing computers in the classroom, the curriculum should be designed to be fully integrated with online research, course work, and the usage of online tools and trusted online resources, including internal knowledge management portals that allow faculty to share resources, notes, questions, and ideas across jurisdictions and school systems.
  4. Non-profit social organizations: Organizations such as AmeriCorps, City Year, and even the Peace Corps can be powerful players by using volunteer networks to provide technology assistance and training to underserved communities.
  5. Departments of Correction: Most local departments of correction offer some programs to re-integrate incarcerated individuals into society upon release.  Technology training can be made part of such programs.
  6. Departments of Employment Services: Local departments of employment services often offer programs to provide vocational skills to individuals on the fringe of employability.  Technology training and skill development should be included in these programs.
  7. Libraries: Public libraries can provide free access to computing resources, as well as set up volunteer networks within the community to provide technology training to underserved individuals.

If you agree with my analysis above, you will agree that the CIO/CTO cannot lead the charge in addressing Digital Divide issues in a holistic fashion.  This leadership should be provided by someone who has purview over a large cross-section of public and private sector resources, can address the issue from multiple angles, and can bring the appropriate stakeholders to the table, perhaps the Deputy Mayor for economic/social development.

The question then is: where does the issue of the Digital Divide fall in the grand scheme of things and the priorities facing localities?  What is the “business case” for addressing this challenge (improved workforce capabilities, reduced unemployment, increased tax revenue through higher-caliber average employment, reduced cost by moving more services online, etc.)?  What do you think?  Is this an issue that governments at the local, state, or Federal level should be focused on?  Is there a real business case to be made, even in the face of oppressive budgets?

Word Cloud from my blog

I created a Wordle from my blog.  Just thought this was interesting and wanted to share it with y’all.

An award deserved by many

I am extremely honored and humbled to have received the 2010 CFO Distinguished Service Award from Dr. Nat Gandhi, the Chief Financial Officer of the District of Columbia.  This is a real honor for me, and I want to share my appreciation for my entire crew, colleagues, and team members, who are the real champions and deserve to share this award with me.

I will write more about the specific areas where my team deserves the lion’s share of the credit and recognition.  In the meantime, I will say this: as it takes a village to raise a child, it takes a diverse, hard-working, and selfless team to manage IT operations within a government agency, especially one as critical and central to government operations as the DC OCFO.  I also want to thank my boss for his strategic vision and guidance, and for keeping me within the guard rails.


Digital Divide – Manifestations

I have previously outlined some thoughts and experiences related to Digital Divide issues in my community, along with my observations on steps the community, and to an extent the government, can take to overcome them.  I am firmly in the camp that believes Digital Divide issues are getting worse, and that something needs to be done to actively counteract this trend.  These issues have the potential to damage America’s global competitiveness and the capabilities of the American workforce, and they also cause day-to-day challenges in making our government more efficient.  See my previous posts here and here.

I went to my volunteering session today at our local library, like every weekend.  My customer today was a young to middle-aged man who was adept at basic computer usage.  He knew how to access the internet and perform searches using Google.  But his computer skills were still very basic.  He did not know how to use MS Word, how to save and retrieve documents, how to create online accounts, etc.

This is a far cry from the average week, where most learners have never sat down in front of a computer before.  When I asked what specifically he wanted help with, he told me his story.  It seems this gentleman had recently moved to the area from Ohio, was jobless, and had a history of incarceration.  He was working towards turning his life around and was applying for Fairfax County Section 8 housing.  He also indicated that he had contacted the county and was told that the Section 8 housing application can only be submitted online.  He needed help filling out his application.  Now let’s take a moment to deliberate:

1) First, I have no way to verify his statement that the Section 8 housing application for Fairfax County can only be submitted online.  If I am wrong, then please correct me.  I can only take his word for it.

2) As outlined in my previous post, wealth is the key determinant of access to computers and the internet, and the root cause of the digital divide.  Section 8 housing applicants, by definition, are on the very bottom rung of the wealth ladder.  Many are displaced, have overcome severe challenges in their pasts, or have a history of incarceration, homelessness, or foster care.  All in all, I think it’s safe to assume that this group does not have ready and seamless access to computers and online resources, and many do not possess the skills necessary to navigate a complex application online.

3) As I helped this gentleman through his application process, I realized how frustrating and confusing the online application created by Fairfax County was.  Specifically:

i) The gentleman was searching for the “fairfax county section 8 application”.  Section 8 is a fairly common name for state/city-assisted housing programs.  However, Fairfax County calls its program the “Rental Program”.  For about 15-20 minutes, the gentleman struggled to find what he was looking for.

ii) Once we found the URL for the Fairfax County Housing Authority, the gentleman spent another 5-10 minutes figuring out that “Fairfax County Rental Programs (FCRP)” was the link he was actually looking for.  It also doesn’t help that the Housing Authority website is fairly rich and complex, which was overwhelming for a novice computer user.  (screen shot included)

iii) When you click on “Fill out your application”, the website throws up a cautionary page warning that “You are leaving the Official Fairfax County Site”.  The reason is that the site is redirecting you to the secure CMS system where the application forms live, but this causes a heck of a lot of confusion for computer novices (what does that mean? I don’t want to leave the Fairfax County site… Where is it taking me?… HELP!).  If the target CMS system is also a county system, why is this warning necessary?

iv) You are then redirected to the county’s CMS system, with a completely different look and feel, causing further confusion as the user tries to reacquaint themselves.  Never mind the fact that the CMS does not work on either Internet Explorer 6 or Safari.  It required assistance from the library tech support to access it via Internet Explorer 7.  Of course, there is no warning for other browsers.  It just doesn’t work.

v) The forms within the CMS system were confusing and ill-designed.  Design elements were not consistent (sometimes the required field names are red, sometimes they have asterisks in front, sometimes they are red asterisks, sometimes they are brown, etc.).  The drop-down labelled “Ethnicity” has two values: “Hispanic” and “Non-Hispanic”.  If this is a valid question, shouldn’t it be a check box?  (all values in the screen shot are made up)

4) I also realized that regardless of how easy to use the online process may or may not be, this is an example of a process that inherently requires person-to-person interaction.  Applicants have several natural questions that require an answer.  There are many, many exception scenarios that must be considered, such as:

    • What if I don’t have a permanent address? It says it’s required.
    • What if I don’t have a phone number? It says it’s required.
    • I currently live out of state, but will be moving here. This option doesn’t exist in the drop-down.
    • Of the housing options available, what are the addresses? Can I specify a preference to be near public transportation? In a particular part of the county?
    • What are the eligibility criteria? Will I get automatically rejected if I have no income? Too much income? How much is too much?

After struggling with it for an hour or so, the gentleman had to “save his application” because he needed to talk to someone at the county to get some information before finishing it.  I hope he can figure out how to log in and access his partially completed application on his own when he starts again.

I want to make it absolutely clear that my comments here are in no way a criticism of the great work that Fairfax County is doing to enable and support its residents and taxpayers.  Fairfax County government is highly progressive, and the Fairfax County government web portal is a 2009 “Best of the Web” winner.  I think the issues outlined here can be experienced on most federal/state/local government websites across the nation.

Witnessing this process today reinforced a couple of things for me:

1) Public sector organizations really have to think long and hard before going “online only” for core services.  I have nothing against moving core service processes online.  There is certainly a strong case to be made for efficiencies, cost savings, and even convenience for certain customers/end users.  However, sometimes it’s just better to be able to go in, talk to someone, and walk through the process with them.  This may be a good example of that.  The demographic using this process is probably not adept at figuring out an online application.  So how can we make such a process better?

i) Make sure that some level of in-person processing capability exists.  If cost savings are critical, the in-person capability can be dialed way down (by appointment only, waiting list, limited weekly hours, etc.), similar to how USCIS offers in-person services to immigration applicants: you have to reserve an appointment by calling ahead, there are limited hours/days of operation, etc.  But this is important for those who simply cannot figure out the online channels, are disabled, or have questions that require a human answer.

ii) Offer an online chat or help desk option whereby people can ask specific questions about the process if they get stuck filling out the form.

iii) Offer the ability for people to send in OCR-capable structured paper forms.  The UK government, the IRS, and many others have been using this mechanism for decades.  It would still save costs.  Even though this involves paper, very little manual intervention is required with modern OCR tools to capture the information into the back-end CMS.  Processing can also occur out of band, one day a week or during weekends, or be outsourced to save costs.  You could even assign a nominal fee ($5 per application) to cover the operating expenses for this option.

2) There is really a lot that can be said and done about web accessibility, design, and usability.  Many, many organizations are not very good at it.  It causes simple things to get complicated, and causes real frustration for novice end users.  There needs to be better professional certification and training for this skill/art form, and organizations need to involve experts during the system design phase of all web projects.  There is really no excuse.  In most cases, we can do better.

What are your thoughts? What is your organization/agency/jurisdiction doing to address these issues? What else can be done to help things in this area?



Mobile Virtual Platforms – Possible sea change

There have been a few recent developments that have individually generated an aggregate public reaction somewhat equivalent to “meh” (although the specialty markets and analysts have been abuzz).  Taken together, however, I think they can form the platform basis for a sea change in mobile platforms.  Of course, a lot of this is based on hypothesis and doesn’t take into account industry trends related to brands and market focus.  But I think there is a great opportunity here for players in the mobile space to leverage these technologies and offer a killer product.  I’ll explain.

First, what “recent developments”?

1) VMware bringing virtualization to mobile phones

2) Multi-core mobile processors (NVIDIA Tegra 2, dual-core Snapdragon, etc.)

While VMware lists several value scenarios for implementing and using mobile hypervisors, all of which are valuable, I am primarily interested in the “multiple profile” scenario, except I think it needs to go a step further.  Here is what I mean:

I have mentioned before that end point convergence is real (most recently in my iPad blog post here), and organizations and government agency CIOs need to figure out how to accommodate it in their enterprises, balancing security and control over corporate use and data against the ease, flexibility, and personal use of end point devices.  The mobile devices of the future are actually mini computers, and every major manufacturer is seeking to maximize returns by designing and selling the “killer device”: one that can excel at both enterprise affinity (security, encryption, policy compliance, Exchange/Domino support, enterprise apps, VPN, remote wipe, strong passwords, etc.) and personal mobility (movies, music, photos, maps, browsing, social media, location services, personal apps, games, etc.).

Handset manufacturers, especially BlackBerry and Apple, have actually come a long way towards this goal, with the latest iPhone hardware and OS at the “implementation under testing” stage for FIPS 140-2 certification and several enterprise management improvements in iOS 4.2, and BlackBerry getting better in the consumer space with OS 6, BlackBerry App World, and new handsets like the Torch with a focus on social media.

However, organizations are still concerned about the implications of end point convergence.  I have seen CIO shops wrestle with several lingering questions, and compromises are typically made around them:

1) Should we let employees hook their personal unmanaged smart phones into the enterprise network? What is the risk? How can it be managed?

2) Should employees be allowed to download games and music onto company provided smart phones/tablets? What are the risks? How can they be managed? (see recent NPR story here)

3) Should we be able to monitor employee activities on company supplied smart phones? (IM, SMS, Email, browsing history, GPS locations?)

4) Should we be able to monitor employee activities on personal smart phones that are hooked into the corporate network? (IM, SMS, Email, browsing history, GPS locations?)

5) What about data retention/archival?

6) What if someone downloads illegal music/porn/does other bad stuff?


This is where my hypervisor/virtualization approach comes in. Except it is different from the virtualization architecture on desktop/server platforms.

A traditional desktop/server platform virtualization architecture looks like this:

This is a well-established architecture for server/desktop platforms.

So how can mobile virtualization help solve the end point convergence issue? Simple. I think there is an opportunity to leverage mobile virtualization to sandbox the corporate virtual stack (VS) from the “personal stack”.  In theory, it should be possible for an enterprise/agency to:

1) Develop a virtual software stack (VS) that includes apps, configuration, access layer, corporate data (offline cache of corporate data in the cloud/repository).

2) Deploy this corporate VS on top of a standardized mobile hypervisor using common APIs/Services.

3) Allow the corporate VS to co-exist with the out-of-the-box/personal stack that includes the individual’s personal data/music/pics/feeds/apps, etc.

4) Set up a virtual firewall between the VSs so apps cannot directly access data/resources across the stacks.

5) Allow policy-based setting of app and VS permissions (can GPS data be returned to a corporate VS app? Can a corporate certificate be accessed by a personal app? etc.).

6) Encrypt the entire corporate VS on the phone with FIPS 140-2 compliance.

7) Provide a remote “kill switch” within the hypervisor layer to remotely wipe or lock the corporate VS without affecting the individual’s personal stack.

8) Connect the encryption key to a corporate common ID/CAC/RSA-token type of solution that can be revoked remotely as needed.
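To make the sandboxing idea concrete, here is a minimal toy sketch of steps 3 through 5: two co-existing stacks, a default-deny virtual firewall, and admin-set cross-stack permissions.  All the class, method, and resource names are my own invention for illustration, not any real mobile hypervisor API.

```python
# Toy model: the hypervisor mediates every cross-stack request and
# denies by default (the "virtual firewall") unless an admin-set
# policy explicitly allows the (requester, resource) pair.

class VirtualStack:
    def __init__(self, name, resources):
        self.name = name
        self.resources = set(resources)  # resources this stack owns

class MobileHypervisor:
    def __init__(self):
        self.stacks = {}
        self.policy = set()  # allowed (requesting_stack, resource) pairs

    def register(self, stack):
        self.stacks[stack.name] = stack

    def allow(self, requester, resource):
        self.policy.add((requester, resource))

    def access(self, requester, owner, resource):
        # Default deny: the resource must exist on the owning stack AND
        # policy must explicitly permit this cross-stack request.
        owned = resource in self.stacks[owner].resources
        return owned and (requester, resource) in self.policy

hv = MobileHypervisor()
hv.register(VirtualStack("corporate", {"corporate_cert", "email_cache"}))
hv.register(VirtualStack("personal", {"gps", "photos"}))
hv.allow("corporate", "gps")  # admin policy: corporate apps may read GPS

print(hv.access("corporate", "personal", "gps"))             # True
print(hv.access("personal", "corporate", "corporate_cert"))  # False
```

The point of the default-deny design is that the personal stack never reaches corporate data unless an administrator has said so, which is exactly the property the remote kill switch and revocable keys in steps 7 and 8 depend on.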

There are also some key differences that will require some invention in this space.  Some of which include:

1) The “Hypervisor Administrative Interface” is typically an on-board software toolset (such as the vSphere administrator console, or VMware Fusion on the Mac, etc.). However, mobile endpoints succeed in a “Just Works” type of environment. End users should not have to worry about which VM to kick off or restart, which one should be accessed when, how to change resource access settings, etc. In theory, the “Hypervisor Administrative Interface” should be a remote service for mobile platforms, similar in design to how the BES and Exchange ActiveSync administrators work. A centralized administrator in the corporate mobile NOC should be able to remotely push the corporate stack, manage it, troubleshoot it, etc., without on-device intervention by the user. Of course, this requires ubiquitous connectivity, or device tethering to an iTunes or BlackBerry desktop-client type of interface.
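As a rough sketch of that remote-administration model, here is what the command flow between a NOC and a device-side hypervisor agent might look like. This is purely illustrative; the agent, commands, and stack objects are hypothetical names, and it also folds in the “kill switch” idea from step 7 above:

```python
class CorporateStack:
    """The corporate VS as pushed by the NOC; personal stack lives elsewhere."""
    def __init__(self):
        self.installed = False
        self.locked = False

class DeviceHypervisorAgent:
    """Device-side agent: receives commands from the corporate NOC and
    applies them to the corporate VS only, never the personal stack."""

    def __init__(self):
        self.corporate_vs = None

    def handle(self, command):
        if command == "push_stack":
            # NOC pushes the corporate VS over the air, no user steps needed.
            self.corporate_vs = CorporateStack()
            self.corporate_vs.installed = True
        elif command == "lock" and self.corporate_vs is not None:
            # e.g. device reported lost: lock corporate apps and data.
            self.corporate_vs.locked = True
        elif command == "wipe":
            # Kill switch: remove the corporate VS; personal stack untouched.
            self.corporate_vs = None

agent = DeviceHypervisorAgent()
agent.handle("push_stack")
agent.handle("lock")
agent.handle("wipe")
print(agent.corporate_vs)   # None -- corporate VS gone, personal stack unaffected
```

The point of the sketch is the asymmetry: the NOC can create, lock, and destroy the corporate VS at will, but none of its commands can reach into the personal stack.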

2) Mobile platforms are all about UI. Again, this goes back to the “Just Works” model. If this virtualization approach leads to different, fragmented UIs (and, inevitably, ugly, outdated corporate UIs), it will erode the usability and adoption of mobile platforms. If the user has to shut off their music playback, go through a series of steps, enter a password, and shut down/start services just to reach a corporate stack that looks, feels, and acts ugly, run one app to perform a transaction, and then work all the way back again, it would be extremely frustrating. Moreover, cross-device services like badges, notifications, and alerts would not work unless you are “logged into” the corporate stack, again taking away from usability. The solution, then, is to invent the ability for the corporate stack to “hook into” a common UI layer for the mobile platform. The end-user experience should be seamless (“they hooked me up and all these new apps magically appeared that give me access to my corporate stuff”). I think this is important for true convergence.
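One way to picture that “common UI layer” is as a single shared surface that apps from either stack register with, so home-screen icons, badges, and notifications work regardless of which stack is “active”. Again, everything here is a hypothetical sketch of the concept, not a real platform API:

```python
class CommonUILayer:
    """Hypothetical shared UI surface spanning both stacks."""

    def __init__(self):
        self.home_screen = []     # (stack, app) icons from both stacks, side by side
        self.notifications = []   # delivered without "logging into" either stack

    def register_app(self, stack, app_name):
        # Corporate apps "magically appear" next to personal ones.
        self.home_screen.append((stack, app_name))

    def post_notification(self, stack, app_name, text):
        # Delivered immediately, regardless of which stack is active;
        # the hypervisor's virtual firewall would still vet the payload.
        self.notifications.append(f"[{stack}] {app_name}: {text}")

ui = CommonUILayer()
ui.register_app("personal", "Music")
ui.register_app("corporate", "Expenses")   # appears seamlessly alongside Music
ui.post_notification("corporate", "Expenses", "Report approved")
print(ui.notifications[0])   # [corporate] Expenses: Report approved
```

The user never switches contexts; only the stack tag on each icon and alert records which side of the sandbox it came from.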

The mobile virtual architecture will then look something like one of the following two approaches:

Option 1: Deep Hypervisor



Pros:

1) No need to define an industry-wide hypervisor API standard

2) Can leverage existing UI and OS hooks in most OSes, allowing inter-stack messaging

3) Minimal OS “awareness” of the hypervisor, resulting in little development effort on existing platforms


Cons:

1) System stability and security weaknesses due to distributed control over underlying resources

2) Potential performance challenges, since there is no centralized dispatching and throttling

3) Requires multiple versions of the hypervisor, one for each platform, resulting in potential fragmentation and dilution of effort per platform

4) Requires very deep hooks into the hardware/software platform to install and manage.


Option 2: Higher-level Hypervisor


Pros:

1) Platform stability: resources managed and dispatched by the OS kernel

2) High-level hypervisor, resulting in easier development/extensions

3) Standards-based hypervisor usable across platforms

4) As long as the platform OS supports the common API, the hypervisor can be installed as a high-level service, without requiring deep hooks.



Cons:

1) Requires an industry-accepted open standard for the hypervisor API (never easy)

2) Requires the platform OS to be hypervisor-“aware” and implement the hypervisor API

3) Potential performance impact due to the higher-level software hypervisor




Ultimately, in either design, this concept would require device and platform manufacturers to open up their platforms far wider than they are currently used to. Standard Android-based platforms may be easier to adapt than the BlackBerry and iOS platforms.

Of course, there is always the third option of having a very high-level hypervisor that effectively runs as an app on top of the OS API and enables the corporate stack on top of itself, but I don’t think that would ever be a solid, acceptable platform: it could not deliver core enterprise functionality, encryption, policy, or the right user experience. Besides, it would never have great performance.

I think we are at the very early stages of this strategy, but I think there will be great momentum over the next year to explore these options. I expect we will see the first commercially available solution in the next 6-12 months, and a multi-platform, multi-vendor set of standards-based offerings in 2012.

And yes… I know there are a LOT of rough edges here, and people smarter than me could tear this whole theory apart in about 10 seconds. However, I also think these issues can be resolved with some thought and clever design.

Ideas? Thoughts? A better way of doing this? Please share!