Global and Mobile App Delivery in the Age of IT Consumerization. Part 3: Changes Going Forward and Analytics
In the 3rd and concluding part of this series of posts on Gigaom’s webinar on Global and Mobile App Delivery in the Age of IT Consumerization, sponsored by Akamai, the last items in the webinar agenda are covered: changes going forward and the impact on analytics and BI.
The webinar (now available on Vimeo and embedded at the end of the post as well) addressed the following topics:
- What is the Application Delivery landscape?
- How are enterprises currently guaranteeing and accelerating performance?
- What new challenges to that model do consumerization, globalization, and mobility bring?
- How should businesses prioritize delivery enhancements?
- What are the most cost-effective enhancements businesses can make today?
- What new management and infrastructure changes will this require going forward?
- What is the impact on analytics and business intelligence solutions?
Turning to the question of what new management and infrastructure changes will be required going forward, I see two key areas that could be influenced here: SLAs and procurement.
In terms of SLAs, I think new SLAs specific to application performance will be needed, and this implies a radical shift. Most SLAs today focus on properties of the physical and networking layers such as latency, packet-delivery ratios, and errored seconds. To record such metrics, it is typical to deploy appliances that capture and monitor network traffic, including specific applications. But as the size of the network and the number of locations grow, this is not always a viable strategy, as the cost to deploy and sustain appliances becomes considerable.
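To make the traditional network-layer metrics concrete, here is a minimal sketch of how they might be computed from per-second monitoring samples. The `Sample` structure and the example values are invented for illustration; real appliances expose far richer data.

```python
from dataclasses import dataclass

# Hypothetical per-second monitoring samples, roughly what an appliance records.
@dataclass
class Sample:
    latency_ms: float       # round-trip latency observed this second
    packets_sent: int
    packets_delivered: int
    errors: int             # transmission errors observed this second

def sla_metrics(samples: list[Sample]) -> dict:
    """Summarize traditional network-layer SLA metrics over a window."""
    sent = sum(s.packets_sent for s in samples)
    delivered = sum(s.packets_delivered for s in samples)
    latencies = sorted(s.latency_ms for s in samples)
    return {
        "avg_latency_ms": sum(latencies) / len(latencies),
        "p95_latency_ms": latencies[int(0.95 * (len(latencies) - 1))],
        "packet_delivery_ratio": delivered / sent if sent else 1.0,
        "errored_seconds": sum(1 for s in samples if s.errors > 0),
    }

window = [Sample(20.0, 1000, 998, 0), Sample(35.0, 1000, 990, 2), Sample(22.0, 1000, 1000, 0)]
print(sla_metrics(window))
```

Note that nothing in these numbers says anything about how a specific application *feels* to its users, which is exactly the gap application-performance SLAs would need to fill.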
To get metrics more relevant to user experience, it has been suggested that client and server endpoints need to be monitored beyond packet headers and into the content of applications. The growing use of mobile networks and applications and the increased complexity of network environments result in more dynamic IP allocation, meaning users could have several IP addresses during a single session. This makes monitoring that relies on IP addresses problematic, hence the proposal for deep packet inspection as an alternative.
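The IP-address problem is easy to see with a toy example. In the sketch below (all record fields and values are hypothetical), one user's session spans two client IPs, and two different sessions share one IP; grouping by IP both splits and mixes sessions, while grouping by an application-level session token reassembles them correctly.

```python
from collections import defaultdict

# Hypothetical request records: with dynamic IP allocation, one user session
# can span several client IPs, and one IP can be reused across sessions.
requests = [
    {"session_id": "abc123", "client_ip": "10.0.0.5", "path": "/login",  "ms": 120},
    {"session_id": "abc123", "client_ip": "10.0.0.9", "path": "/report", "ms": 340},  # IP changed mid-session
    {"session_id": "def456", "client_ip": "10.0.0.5", "path": "/login",  "ms": 95},
]

def by_key(records, key):
    """Group request records by an arbitrary field."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r)
    return groups

ip_groups = by_key(requests, "client_ip")
sess_groups = by_key(requests, "session_id")

# The 10.0.0.5 bucket mixes two different users' sessions...
print({r["session_id"] for r in ip_groups["10.0.0.5"]})
# ...while the session-token view keeps abc123's two requests together.
print(len(sess_groups["abc123"]))
```

Getting hold of such a session token in the first place is what pushes monitoring up the stack, towards application content, with the neutrality and privacy questions that follow.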
Personally, I am not at all sure this is a viable strategy either. It is not easy and should not be taken lightly, as it touches upon issues of network neutrality, privacy and so on. However, I think this is something that will have to be dealt with one way or another in order to derive more meaningful metrics and SLAs.
As far as procurement and management are concerned, I think we need a new, more Agile paradigm. The traditional procurement model for application infrastructure is not well suited to many of today's organisations and scenarios. In this respect, I believe the Agile model of application development could inform and benefit infrastructure management as well. The Agile philosophy is about not trying to define everything upfront, embracing and managing change, and being prepared to engage in subsequent stages of evolution of initial specifications, assumptions and designs. I'm not saying this is a fix-all remedy, and it takes some skill to master this philosophy and apply it successfully – we have seen cases where "Agile" has become a buzzword used to justify a lack of planning and structure.
I do believe, however, that if applied right and in the right dose, this type of infrastructure procurement and management can benefit organisations. Instead of trying to cover every conceivable scenario, focus on your day-to-day typical operation, deploy application delivery and hybrid multi-cloud solutions, monitor your performance, and be prepared to revisit your planning based on operational data and your strategic planning.
The final question, on the impact on analytics and business intelligence solutions, is a very interesting one that brings up some significant side effects. There are two main aspects I can think of.
For one, being the control point for the delivery of enterprise applications means you can get insights into many aspects of those apps. One of them would obviously be standard access and performance metrics – things like how many hits a specific app has had, when, from where, how long it took to respond and so on. These are the kinds of metrics you would get from web server or application server logs, except now there could be a central point for them, which means you can aggregate at any level you want – department-wide, enterprise-wide, whatever. That alone would give you better insight, but that's not all.
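A short sketch of what that central aggregation could look like. The log-entry fields (`app`, `dept`, `resp_ms`) and the sample values are invented for illustration; the point is that one dataset, collected at the delivery layer, can be rolled up per application, per department, or enterprise-wide.

```python
from collections import Counter, defaultdict
from statistics import mean

# Hypothetical access-log entries, as a central delivery point might collect them.
logs = [
    {"app": "crm", "dept": "sales",   "resp_ms": 180},
    {"app": "crm", "dept": "sales",   "resp_ms": 220},
    {"app": "erp", "dept": "finance", "resp_ms": 340},
    {"app": "crm", "dept": "finance", "resp_ms": 150},
]

def hits_and_latency(entries, level):
    """Aggregate hit counts and mean response time at any grouping level."""
    hits = Counter(e[level] for e in entries)
    latencies = defaultdict(list)
    for e in entries:
        latencies[e[level]].append(e["resp_ms"])
    return {k: {"hits": hits[k], "avg_resp_ms": mean(latencies[k])} for k in hits}

print(hits_and_latency(logs, "app"))   # per-application view
print(hits_and_latency(logs, "dept"))  # per-department view of the same data
```

The same records answer both views, which is exactly what you lose when each app server keeps its own logs in isolation.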
There's more we can get when we start seeing the big picture, and that's correlation and business insights. For example, you may be able to identify access patterns in the use of applications, spot opportunities for cross-selling or pain points in customer or employee experience, and thus help your organization do better. And this is becoming more pronounced – for example, we've seen lately how big players in the application monitoring space are augmenting their offerings to provide business analytics and insights as well. There's a lot of value there.
The other thing is what application delivery can do for analytics applications as an end user, so to speak. Like pretty much all other applications, analytics apps are moving to the cloud. This is happening for a number of reasons. Some are the same reasons that apply to all other applications; one is unique, albeit influenced by them: since applications and data are moving to the cloud, it makes sense for analytics to move there as well, to be located as closely as possible to its data sources. However, many of the data sources that analytics and BI applications have to integrate are not in the cloud. So the multitude of locations and the potentially overwhelming volume of data that has to be moved around pose a significant challenge for application delivery solutions.