
Structure Europe 2012

Linked Data Orchestration was present at Structure Europe 2012 as a startup exhibitor, and this is a report on our experiences on the 16th and 17th of October in Amsterdam. Soon we will update the report with an on-site interview with CEO & Founder George Anadiotis.

Structure has been a series of key industry events in the US in recent years, organized by GigaOM with a focus on cloud infrastructure, Big Data and applications. This was the first time the event was held in Europe, and it was an opportunity for us to showcase our proposition and exchange views with thought and industry leaders in the field. Most of the participants fell into one of three categories – infrastructure providers, application providers and investors – and there was something for each of them there.

For infrastructure providers, the hottest topic was probably the emerging trend of Software-Defined Networking, especially after VMware’s recent $1.2 billion acquisition of Nicira. Other vendors are also moving in the same direction, as this signals the potential for innovation and profit. Another key topic was data center architecture and energy sustainability, featuring among others Facebook’s VP of Infrastructure elaborating on their strategies, followed only hours later by a very rare public view into Google’s traditionally secretive data center infrastructure.

Being on the application-provider end ourselves, however, our focus was on issues such as public vs private cloud utilization and its implications for European organizations, Big Data processing, and the Hadoop-vs-the-World religious war that seems to be raging.

Public vs private cloud utilization and implications for European organizations. All the talk about infrastructure and how it can scale and serve applications efficiently is apparently necessary and a good thing. For example, many participants stressed the fact that private and public clouds are not necessarily mutually exclusive options, and that federated clouds and mix-and-match strategies with regard to location and vendor are the ones most likely to prove effective. However, once more it seems that technological evolution may be leading the way, but adoption and mindset evolution is lagging.

For example, the findings in Cloudyn’s report on cloud utilization, due next week, seem to abide by the 80-20 rule: as reported by Cloudyn CEO Sharon Wagner, only 20% of compute nodes are optimally utilized, and 20% of storage nodes are not used at all. This highlights the fact that cloud technology, in addition to not being fully adopted, is also not fully exploited. According to the 451 Group, the three main factors that inhibit cloud adoption are the learning curve, complexity and cost. This is natural and can be attributed to the fast pace at which change is taking place, which leaves little breathing space for most organizations that see IT as an enabler, not an end in itself. Our approach to this is simple – we do all the hard work of utilizing the cloud for you; you use our applications and reap the benefits.

Big Data processing and Hadoop vs the World. If this (“Hadoop vs the World”) seems a bit over the top to you, don’t get us wrong: as far as we are concerned, Hadoop is a very successful and efficient implementation of a parallel programming paradigm that is comparatively easy for real-world software engineers to get their heads around. That said, however, the problem stems in a way from its own success: when someone has a hammer, every problem starts looking like a nail, and it seems there are many organizations with Hadoop hammers in their hands.

So, Hadoop is great for ETL and arbitrary, unstructured data storage and processing – especially when combined with the ecosystem of HDFS and Hive / Pig. However, when it comes to domain-specific applications, domain knowledge becomes the driving factor and Hadoop’s schema-on-read advantage becomes questionable. Some memorable quotes on the topic:

“With Big Data comes Big Fragmentation, so you need domain knowledge. In order to have data-driven applications, you need some sort of distributed metadata governance”.

Mats-Olov Eriksson, Director of Data Warehousing

“The Holy Grail is to have some sort of metadata driving the process”.

Zdenek Svoboda, VP, Platform, GoodData

“We want our clients to focus on their analytics, so we need to put all the elements together. One way of speeding things up is applying domain knowledge”.

Phil Francisco, VP, Big Data Product Management, IBM

This is a topic that spanned the conference and obviously deserves some analysis of its own. We also think that domain knowledge is at the center of the process, and therefore the domain model should be well thought out and elaborate, yet at the same time flexible – this is also at the center of our approach.
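To make the schema-on-read point above concrete, here is a minimal Python sketch (the log format and field names are our own illustration, not taken from any of the talks): the raw records are stored untouched, and a schema is only imposed at the moment a query runs – which is flexible, but means the schema carries no domain knowledge until someone supplies it.

```python
# Schema-on-read sketch: store raw lines as-is, apply a schema at query time.
# The record format and field names below are hypothetical, for illustration only.

raw_store = [
    "2012-10-16,amsterdam,signup",
    "2012-10-16,amsterdam,login",
    "2012-10-17,london,signup",
]

def read_with_schema(lines, schema):
    """Project each raw CSV-style line onto the given schema at read time."""
    for line in lines:
        values = line.split(",")
        yield dict(zip(schema, values))

# The same stored bytes can be read under any schema the reader chooses:
events = list(read_with_schema(raw_store, ["date", "city", "action"]))
signups = [e for e in events if e["action"] == "signup"]
print(len(signups))  # 2 signup events in the sample data
```

The flip side, as the quotes above suggest, is that nothing validates the data at write time – any domain constraints (valid cities, known action types) have to live somewhere else, which is exactly where metadata governance comes in.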

Finally, from the investor point of view there were also many highlights, such as the exhibition space where Linked Data Orchestration was featured, the Launchpad and panel discussion on cloud startups in Europe, and the Rockstart Answers workshop held during the conference. This is definitely a domain worth keeping an eye on for investments, as the market is getting to be as Big as the Data and there are many innovative value propositions out there.

All in all, it was a well-organized and thought-provoking event that functioned well on every level. We are already looking forward to the next one, but we’ll definitely be active in the meantime as well.
