By John Mulhall. How a sober view of current big data technology shines a light on a bright future.
The wealth of open source innovation and large-scale projects from enterprise and open source communities alike has truly caught my imagination. From Hadoop to HBase, numerous impressive innovations have helped big data become viable for big business. Many things have driven this evolution, including the emerging partnerships between open source communities and most major technology corporations, Oracle among them. The HUG Ireland event sponsored by Oracle at the Morrison Hotel on Monday 12th September demonstrated this.
Mark Rittman, Co-Founder of Rittman Mead, kicked off with big questions about the future of analytics and its history, from the days of pure relational databases (RDBMS) through to the evolution of Hadoop. He mentioned the obvious benefits of Hadoop: fault tolerance, lower running costs, scalability and storage flexibility. These benefits illustrate why Hadoop has become the enterprise data warehouse (EDW) offload analysis platform of choice for big business. It gives the business a data pipeline with reasonable returns in data-driven insights and development potential, and it can be set up at a reasonable running cost. Mark also covered newer architectures such as the Data Lake and the evolution of frameworks around Hadoop, including self-service data discovery and analytics.
He also talked about open source tools like Apache Spark, Kudu and Mesos, along with the changes in ETL that come with this new age of big data. He finished with the case for notebook analytics using the likes of Apache Zeppelin and Jupyter Notebooks, along with Oracle's Big Data SQL. The case is convincing both for making the modern data analyst more effective and for using Hadoop as an ETL tool! Mark delivered two entertaining talks that hit home on some of the big questions in big data today.
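The Hadoop-as-ETL pattern Mark made the case for boils down to extract, transform and load stages run over distributed storage. As a minimal sketch of that shape (plain Python standing in for a Spark or MapReduce job; the field names and records are invented for the example):

```python
# Minimal ETL sketch: the extract/transform/load stages a Hadoop or
# Spark offload job would run at scale. All data here is invented.

raw_rows = [
    "2016-09-12,checkout,199.99",
    "2016-09-12,checkout,",        # malformed: missing amount
    "2016-09-13,refund,-50.00",
]

def extract(rows):
    """Parse CSV-like lines into field lists."""
    return [r.split(",") for r in rows]

def transform(records):
    """Drop malformed records and convert amounts to floats."""
    return [
        {"date": d, "event": e, "amount": float(a)}
        for d, e, a in records
        if a  # skip rows with a missing amount
    ]

def load(records):
    """Aggregate by event type -- the 'insight' end of the pipeline."""
    totals = {}
    for rec in records:
        totals[rec["event"]] = totals.get(rec["event"], 0.0) + rec["amount"]
    return totals

print(load(transform(extract(raw_rows))))
# e.g. {'checkout': 199.99, 'refund': -50.0}
```

At cluster scale the same three stages run in parallel over HDFS blocks, which is why the offload pattern scales so cheaply compared with doing the cleansing inside the data warehouse.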
The final talk of the evening came from Jeff Richmond, Cloud Enterprise Architect with Oracle UK, who covered the technical side of the three I's of Big Data. Jeff's focus was on the application of big data: how we gain insights, why we use it, and how to structure our approach to big data and the stack! He talked about the challenges of turning insights into actions and why identity matters. He stressed the need to know what we want to achieve with big data, and how it can help with tasks such as process anomaly detection and fraud detection.
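Anomaly detection of the kind Jeff mentioned is often bootstrapped with something as simple as a z-score over a metric's recent history: readings far from the mean get flagged for review. A minimal sketch (the threshold and sample amounts are invented for the example, not from the talk):

```python
import statistics

def find_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` population standard
    deviations from the mean. Threshold is an invented example value."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Invented transaction amounts with one obvious outlier.
amounts = [12.0, 14.5, 13.2, 11.8, 12.9, 950.0, 13.4]
print(find_anomalies(amounts))
# → [950.0]
```

Real fraud pipelines layer far more sophisticated models on top, but this captures the core idea of scoring each observation against the behaviour of the whole population.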
Jeff finished with some points on the bewildering array of technologies that make up a big data stack, using Oracle's B2B (Big Data) offering in the cloud as an example of standardisation. He outlined how it lets new enterprise entrants into the big data processing and analytics space quickly introduce, stabilise and make effective a big data solution.
Jeff reflected on the event: “Hopefully we were able to shed some light on what Oracle is doing in the Big Data space and where we see analytics going. Know what you want to do in business and what part big data can play in helping you meet your goals!”
Exciting thoughts for these exciting times, making big data a partner for life!