
Posts Tagged with: Big Data

Enterprise Ready Hadoop Infrastructure from EMC – Isilon


With increased reliance on technology and large-scale use of applications and IT systems, the amount of structured and unstructured data stored and processed by a typical modern enterprise has been growing rapidly. Organizations that don't want to be left behind need highly efficient, effective and scalable storage solutions to manage this growth.

Organizations also need high-end storage systems because they underpin powerful analytics, letting businesses draw relevant insight from their data. EMC Isilon scale-out network-attached storage (NAS) with native Hadoop Distributed File System (HDFS) support gives Hadoop users access to shared storage infrastructure that helps close the gap between Hadoop big data environments and enterprise IT analytics.

The Isilon NAS integrated with HDFS offers customers a way to accelerate enterprise-ready deployments of Apache Hadoop. Until now, Hadoop customers have had to rely on storage infrastructure that wasn't really optimized for big data, limiting Hadoop's applicability in large enterprises. EMC Isilon with native HDFS tackles this challenge and offers an all-inclusive, enterprise-ready storage system to collect, protect, analyze and share data in a Hadoop environment.


By supporting Hadoop natively in an enterprise-class storage solution, Isilon lets customers benefit from comprehensive data protection irrespective of the size of their Hadoop data. By combining Isilon scale-out NAS with native HDFS, EMC reduces the complications of running Hadoop and allows enterprises to extract valuable insight from gigantic heaps of structured and unstructured data.

EMC Isilon gives Hadoop customers built-in enterprise data protection through the integration of the Isilon scale-out NAS storage system with native HDFS. This integration eliminates the single point of failure in the open-source Apache Hadoop deployments that enterprises are running, and it lets customers use the Hadoop distribution of their choice to accelerate adoption in an enterprise-ready environment.

The industry's first scale-out storage system with native HDFS offers the following advantages:

  • Lets enterprises get more benefit out of Hadoop
  • Reduces risk
  • Increases organizational knowledge

The reason enterprises should consider 'HDFS plus Isilon' is that no separate ingest step is necessary anymore. It is comparatively cheaper, and performance is still better. With multiple enterprise features, multi-protocol access and Hadoop multi-tenancy, 'HDFS on Isilon' supports nearly every distribution you'd possibly want to work with, such as Pivotal, Apache, Cloudera and Hortonworks. The NameNode single point of failure and 3x mirroring, two key challenges with DAS-based Hadoop, are eliminated too.
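
As a rough illustration of what 'no separate ingest' means in practice, the sketch below lists a directory in an Isilon-backed HDFS namespace over WebHDFS from Python. This is only a minimal sketch: the hostname, user and path are placeholders, the `hdfs` client library is just one of several ways to reach the data, and the WebHDFS port (8082 is common on OneFS, versus 9870 on a stock NameNode) is an assumption to verify against your own cluster. The point is that files landed on the same OneFS file system over NFS or SMB are visible to Hadoop tools without a copy step.

```python
# Minimal sketch: browsing an Isilon-backed HDFS namespace over WebHDFS.
# Hostname, port, user and path are placeholder assumptions; check your cluster.
from hdfs import InsecureClient  # pip install hdfs

# SmartConnect zone name of the Isilon cluster (hypothetical host)
client = InsecureClient("http://isilon-smartconnect.example.com:8082", user="hdfs")

# Data written to the same OneFS file system over NFS/SMB shows up here
# for Hadoop jobs with no separate ingest step.
for entry in client.list("/ifs/data/warehouse"):
    print(entry)
```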

Advantages of EMC Isilon storage implementation over traditional implementation

  • Offers scale-out storage to support multiple workflows and applications
  • No NameNode downtime, since the NameNode function is distributed across the cluster
  • Provides matchless storage efficiency
  • Lets compute and storage scale independently of each other
  • Provides end-to-end data protection using SnapshotIQ, SyncIQ and NDMP backup

Benefits an enterprise derives from Hadoop as a data storage and analytics solution

Hadoop, as an enterprise-ready big data analytics solution, can help store, analyze, structure and visualize large amounts of structured and unstructured data. It is especially valuable because it lets users process unstructured big data and give it structure so that it can be put to work for the enterprise.

a)   Benefits an enterprise derives

  • Enhanced business agility
  • Easier data management
  • Faster and more convenient data analytics
  • Reduction in time and cost of infrastructure and maintenance
  • Ability to accommodate and analyze data irrespective of its type or size

b)   Advantages of enterprise-ready Hadoop on EMC Isilon:

  • Dependable security
  • Scalable storage solution
  • Continuous availability
  • Simple integration with existing infrastructure
  • Easy deployment and faster administration

EMC Hadoop Starter Kit (HSK)

If you are an enterprise running VMware vSphere and/or EMC Isilon and want to extract insights such as customer sentiment from big data, you will need Hadoop integration. Hadoop integrated with Isilon becomes enterprise-ready and helps your data architecture handle the new opportunities the data offers alongside its existing workload.

To make things even simpler for organizations that use VMware vSphere and EMC Isilon, EMC has developed the Hadoop Starter Kit (with an accompanying video). This step-by-step HSK guide is designed to help enterprises explore the full potential of Hadoop.

VMware has also started an open source project (called Serengeti) that can help automate the management and deployment of Hadoop clusters on vSphere. With a virtualized infrastructure, Hadoop can be run as a service.

Whether you are a seasoned Hadoop user or a newbie, you can benefit equally from the HSK for the following reasons:

Rapid provisioning: Much of the work of standing up a Hadoop cluster can be automated. The guide walks you through creating Hadoop nodes and setting up and starting the Hadoop services on a cluster, which makes the whole process simple to execute.

High availability: Using the virtualization platform's high-availability protection guards against the single point of failure in the Hadoop storage layer.

Profitability: Enterprises can use and benefit from any Hadoop distribution across the big data application lifecycle, with zero data migration.

Elasticity: The same physical infrastructure can be shared between Hadoop and other applications, since Hadoop capacity can be scaled up and down according to demand.

Multi-tenancy: The Hadoop infrastructure offers a multi-tenancy option, meaning different tenants can be given their own virtual machines, which enhances data security.

The EMC Hadoop Starter Kit combines the benefits of VMware vSphere with Isilon scale-out NAS to help achieve big data storage goals and add an analytics solution.

Some of the reasons the HSK can be considered a complete solution are mentioned above. The merits, especially 'profitability,' mean that users can run any Hadoop distribution, including Hortonworks, Pivotal HD, Cloudera and open-source Apache, throughout the big data application lifecycle with zero data migration.

In other words, by starting a Hadoop project on EMC Isilon scale-out NAS, enterprises avoid any data migration when they have to move from one Hadoop distribution to another, and users can run multiple Hadoop distributions against the same data without duplicating it.

EMC Isilon’s Notable Collaborations

Isilon also collaborates with companies such as Splunk, Rackspace and RainStor. EMC Isilon scale-out NAS lets users scale both capacity and performance to meet their needs, and these partnerships bring additional benefits to Hadoop users.

Isilon and Splunk: The Splunk for Isilon app integrates EMC scale-out NAS with Splunk. The partnership helps enterprises manage the avalanche of data across virtual, cloud and physical environments and transform it into real-time insight.

Isilon and Rackspace: EMC Isilon helps enterprises store, consolidate, analyze and use data sets and applications exceeding 100 TB. Rackspace offers its services on the high-density, large-capacity EMC Isilon NL400 and X400 models for the greater benefit of enterprises.

Isilon and RainStor: The combination of EMC and RainStor helps enterprises run their Hadoop distribution anywhere. RainStor's data compression technology helps enterprises analyze large data sets with greater efficiency and predictability.

SiSense makes big data analytics possible on a Chip


Thriving on the concept of 'big data meets business intelligence,' an Israeli company has worked out a way for enterprises to run analytics in the cache memory of a CPU, minimizing the hardware spend that big data usually demands.

If you were to ask someone what big data's biggest drawback is, they would likely answer 'more hardware,' meaning more expenditure. With organizations seeing their data volumes more than double each year, they need ways to store all that data, and not just store it but also process, search and analyze it over time. All of this requires IT infrastructure, which involves a lot of cost.

As an innovative step toward helping organizations save on hardware, SiSense, a big data analytics company, has designed a way to do analytics in the cache memory of the central processing unit: analytics on a chip. The idea of rethinking business intelligence, working at high speed on smaller data sets on multicore processors, began back in April when SiSense received $10 million in Series B funding from Battery Ventures along with Opus Capital and Genesis Partners.

SiSense has essentially worked out a technique to make analytics software run on multicore processors, effectively a parallel compute cluster on a chip. Explaining the system's efficiency, SiSense CTO Eldad Farkash says their analytics-on-a-chip system can run queries against almost 20 terabytes of data. According to Farkash, the technology works on Intel and AMD multicore 64-bit architectures and is probably the first to support Intel's new Haswell architecture. Looking ahead, Farkash says an analytics system like theirs is something we will carry in the palm of our hands; it will run on new-age iPads and Android tablets with terabytes of storage.

Seen from a broader perspective, SiSense could be a departure from high-performance computing of big data on Hadoop Hive. That could save enterprises a great deal of what they currently spend on maintaining their data and analyzing it across various platforms. SiSense looks like a good way to take big data analytics beyond Hadoop hives; what's your take?

Via: InformationWeek

Big data to drop insurance costs for young drivers


Car insurance firms in many parts of the world are using big data to monitor drivers' behavior and help lower their insurance costs. Big data is also helping those drivers improve their driving.

Insurance firms like Progressive in the United States, Generali Insurance Group in Italy and Tesco Bank in the United Kingdom are employing different ways to track their customers' driving routines and are using this data to lower insurance costs for better drivers.

The idea behind monitoring individual drivers is to offer them lower insurance prices and also to make them better drivers. The insurance companies believe that by directly observing how people drive, they will be able to change the way insurance works.

Insurance companies have adopted this technique of monitoring driver data now because the years-old technology has only recently become affordable. In this first phase, firms are trying to convince customers that observing driving behavior is actually a good thing, and customers are still getting used to being tracked every second.


How this works

Insurer Progressive in the U.S. boasts more than a trillion seconds of driving data from a total of 1.6 million customers. To monitor each individual, the insurer installs its Snapshot telematics device in a person's car to record, second by second, how fast they drive and what time of day they drive. The device beeps three times when the brakes are applied suddenly.

The idea behind installing telematics in cars is simple: to train people to drive better. The insurer believes observing an individual's driving can help change the way insurance works. Eighteen-year-olds pay a lot for insurance, yet some of them are genuinely good, safe drivers who deserve better deals.
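
To make the pricing idea concrete, here is a toy usage-based scoring function in Python. It is not Progressive's actual model: the inputs mirror the kinds of signals Snapshot records (mileage, sudden braking, time of day), but every weight and threshold below is an invented assumption, purely for illustration.

```python
# Toy usage-based insurance score from telematics trips.
# Weights and thresholds are illustrative assumptions, not any insurer's model.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    hard_brakes: int     # sudden-braking events (the "three beeps")
    night_miles: float   # miles driven late at night

def premium_multiplier(trips: list[Trip], base: float = 1.0) -> float:
    """Return a premium multiplier; values below 1.0 mean a discount."""
    total_miles = sum(t.miles for t in trips) or 1.0
    brakes_per_100mi = 100 * sum(t.hard_brakes for t in trips) / total_miles
    night_share = sum(t.night_miles for t in trips) / total_miles

    penalty = 0.02 * brakes_per_100mi + 0.3 * night_share    # invented weights
    low_mileage_credit = 0.1 if total_miles < 500 else 0.0   # invented threshold
    return max(0.7, base + penalty - low_mileage_credit)

trips = [Trip(miles=30, hard_brakes=1, night_miles=0),
         Trip(miles=12, hard_brakes=0, night_miles=5)]
print(f"premium multiplier: {premium_multiplier(trips):.2f}")  # ~0.98, a small discount
```

A careful, low-mileage driver ends up with a multiplier below 1.0, while frequent hard braking and heavy night driving push it up, which is exactly the kind of differentiation described here.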

New scope against traditional car insurance

Traditional car insurance is based on averages, not specifics. You fill out a form stating your age, the type of car you drive and other details such as gender, and based on this information your risk is mapped and you are placed into a predetermined insurance bracket. In a nutshell, thousands of people end up with the same risk rating even though their driving abilities, and therefore their risks, are different.

Eventually, most people end up overpaying for their car insurance!

The applications

With big data technologies such as telematics, plus analytics that collect data from social networks and other platforms, insurers will be able to enrich individual risk profiles and set better insurance prices based on this avalanche of data.

If monitoring reveals that a driver drives less frequently and drives safely, that person should be able to save on insurance, paying less than someone of similar age with a similar car who drives rashly. Big data analytics is the most practical way to achieve this.

Currently, however, only 2 percent of US car insurers are offering driving-monitored insurance, but this is expected to rise to 10-15 percent over the next five years. Besides determining risk profiles, big data analysis can also help determine which driver was at fault in an accident.

Source: BBC

Demand for Big Data analysts surges in UK


Companies in the United Kingdom see an opportunity to take a giant leap ahead in the global effort to deal with the volume, velocity and variety of data generated each day. But UK companies are finding it hard to hire employees with the big data analytics and modelling skills they require, which is escalating demand for big data specialists.

Big data has made its presence felt and, by all indications, is here to stay. Even technology specialists who previously brushed off big data as a buzzword have acknowledged its importance for enterprises.


The global economy is being transformed by big data analytics at a brisk pace. To make the most of this staggering amount of data, one-third of large UK companies are preparing to adopt big data technologies in the next five years. What does this shift imply for the demand for employees?

According to "Big Data Analytics: Adoption and employment trends," a joint report released by e-skills UK and the SAS Institute, the rapid uptake of big data technologies in UK organizations is driving demand to develop critical data analytics skills. The report suggests a 243% increase in demand for big data specialists in the UK by 2017, meaning almost 69,000 big data experts will be required by organizations to make fact-based business decisions.

The report further reveals that the UK's large organizations implementing big data technologies currently have about 94 crore big data users, a number expected to grow by 177% by 2017.

Recognizing how rapidly the economy is being transformed by big data technologies, the UK's Minister of State for Trade and Investment, Lord Green, believes this is the UK's best opportunity to master the volume, velocity and variety of data and lead the global vision on big data. For this, Green says, the UK's government, business and academia will have to work in tandem to develop the skills that will foster that growth.

Karen Price, CEO of e-skills UK, considers big data analytics skills to be of strategic importance for the UK. Price believes businesses and government need to treat big data skills as strategically important alongside mobile computing, communications and cyber security, since these skills will be highly relevant in the near future.

Ford’s new green future driven by big data


Ford considers big data analytics the next frontier of innovation and productivity. It has been using big data, both in and out of its vehicles, to design a new generation of eco-friendly vehicles that benefit the environment appreciably.

When it comes to green automobile innovation, hardly any manufacturer comes close to what Ford has delivered to the environmentally conscious world in the past few years. Ford has done many things differently to get to the top, but the one that stands apart from the rest is how it uses big data to its advantage.

Ford has invested a great deal in big data technologies and analytics. That investment has allowed the automaker's scientists and researchers to understand realistic fuel economy targets and green routing services, and to learn about the availability of the rare raw materials that go into in-car batteries and powertrains.

Ford decided to invest in big data on the recommendation of the company's Research and Innovation Center, which came into existence toward the end of the 1990s. The center began by providing small insights extracted from the avalanche of data, but over time the group has come to provide broad information on climate science findings, weather trends and other factors that can influence Ford's decisions about new products and services. The information derived from analyzing big data is helping Ford set new standards in green automobile technology.

Despite heavy investment in big data technologies, Ford has not yet been able to streamline and consolidate all of this data. It is still stored in different pockets, which leaves much of the information in isolation, though Ford is working on solutions to this.

Ford's new vehicles produce gigabytes of data each hour. The automaker is working on ways to seek permission from vehicle owners to collect this data in cloud data centers, analyze it and use the results to add useful green services to its vehicles. Ford also hopes to use data collected from its fleet to help automate a green routing system so that vehicles can automatically optimize their speeds to create the least impact on the environment.

From their big data analyses, Ford's scientists, computer modelers, mathematicians and other researchers have determined that the company is better off investing in and creating vehicles based on alternative powertrains such as hybrid, all-electric and plug-in electric.

Jut raises $20 million Series B funding to ride on top of big data arena


Jut, a San Francisco-based, stealth-mode developer of enterprise software for handling big data, is set to be a new entrant in the big data industry. To make its way, Jut has secured $20 million in Series B funding.

In a round led by Accel Partners, with Lightspeed Venture Partners and Wing participating, Jut has raised funds to expand its engineering team and bring the first version of its product to market.

With the funding, Jut hopes to provide enterprises with technology that lets them deal with the avalanche of data known as big data. In effect, Jut is articulating a vision of riding on top of the big data arena.

Apurva Dave, vice president of marketing at Jut, says that by 2016 big data will account for over $200 million in IT spending. This spending will go not only toward the infrastructure to store and maintain big data, but also toward services and products that help derive meaning from this mammoth data.

Jut will provide enterprises with big data infrastructure that will help them store, analyze and derive meaningful insight from big data.

Jut will use the capital to expand its engineering team and to bring the first version of its product to market. The company is being built around an open-source culture that thrives on data-based decision making to answer the hardest questions of big data.

10 Must-Know Facts about Big Data


With growing awareness of big data, enterprises around the world are realizing the potential of this avalanche of data and are finding ways to unlock business value from it.

Big data has lately become a buzzword in the world of technology. Analytics and service providers are busy working out the benefits that can be derived from big data by collecting, analyzing and using it. But the questions remain: where does this data come from, and how can an enterprise benefit from it? To satisfy your curiosity, we have listed below some must-know facts about big data.


  1. Data experts believe that big data is still at a very nascent stage, so most enterprises are still deciding whether to adopt it or continue a wait-and-watch strategy. Everyone does understand, though, that big data has the potential to change the way we interpret data today.
  2. Big data has become an integral part of solutions to real-world problems; it is helping improve power consumption and transportation and making social networks more efficient to use.
  3. It is estimated that global storage capacity for digital information has reached about 1,200 exabytes. To get a sense of how much that is: if all this digital information were burned onto CD-ROMs stacked in piles, it would form approximately five separate piles, each reaching to the moon (see the back-of-the-envelope check after this list).
  4. Another striking fact: there are nearly as many bits of digital information as there are stars in the universe. That is the scale of big data we are talking about.
  5. Every hour, people around the world consume enough digital information to fill 7 million DVDs. Laid side by side, those 7 million DVDs would stretch about 95 times the height of Mount Everest (also checked below).
  6. Mobile phones are so commonplace that it is hard to believe there are still places in the world without them, yet their reach keeps spreading globally. It is so extensive that, according to one report, the number of active cellphones will reach an estimated 7.3 billion by 2014. That number may exceed the number of people on earth, but it doesn't mean everyone on the planet has a phone; it reflects users who own multiple phones. Internet users on mobile phones and other mobile devices together make up 36 percent of the world's population and create billions of pieces of digital data every day.
  7. An estimated 247 billion e-mails are generated each day, out of which 80 percent are recorded as spam. In addition to e-mails, Facebook is the second largest data cruncher on the internet. An estimated 30 billion pieces of data are shared on Facebook on a daily basis.
  8. Considering the size of data that is being generated, there are currently over 500,000 data centers across the globe. These are large enough to fill about 5,950 football fields.
  9. About 75 percent of the world's digital information is produced by individuals.
  10. Given the rate at which digital data is multiplying, IT departments across the globe will require ten times as many servers by 2020. This data growth also means nearly twice as many data analysts will be needed.
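
The two "stacking" comparisons in facts 3 and 5 hold up to a quick back-of-the-envelope check. The physical constants below are rounded assumptions (700 MB and 1.2 mm per CD-ROM, a 12 cm DVD, 384,400 km to the moon, 8,848 m for Mount Everest):

```python
# Rough sanity check of facts 3 and 5; all constants are rounded assumptions.
EXABYTE = 10**18

# Fact 3: 1,200 exabytes burned onto 700 MB CD-ROMs, each 1.2 mm thick
cds = 1200 * EXABYTE / (700 * 10**6)
stack_km = cds * 1.2e-3 / 1000
print(f"CD stack: about {stack_km / 384_400:.1f} times the distance to the moon")  # ~5.4

# Fact 5: 7 million DVDs (12 cm across) laid side by side vs Mount Everest
row_m = 7_000_000 * 0.12
print(f"DVD row: about {row_m / 8_848:.0f} times the height of Everest")  # ~95
```
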
Gartner Big Data 2013: Highlights


Gartner's annual big data survey report for 2013 was released recently, and as expected its highlights are striking. The survey backs up several widely held beliefs about big data with evidence.

The biggest revelation of this year's Gartner survey is that 64 percent of companies globally have already implemented or are planning to implement big data systems. Within that figure, nearly 30 percent of companies have already invested in big data systems, 19 percent plan to invest in the technology over the next year, and another 15 percent are willing to spend over the next couple of years.

These are significant numbers, and they show genuine interest among companies in adopting big data systems. A large share of enterprises are re-examining how they manage their data and hunting for new ways to get the best out of the ever-growing data industry.

The Survey

According to Gartner, the survey covered 720 Gartner Research Circle member companies and was carried out in June 2013. It was designed primarily to understand organizations' investment plans for big data technologies, what stage of implementation they have reached and how big data is helping them solve problems.

Despite its limited sample, the variety of companies surveyed makes it a broad and effective representation of how the world of big data is shaping up and how enterprises, big and small, are adopting it.

The Prominent Findings

The survey reveals that the industries that lead the big data investments for 2013 include media, communication and banking.

According to Gartner, about 39 percent of media and communications organizations reported having already invested heavily in big data technologies, and 34 percent of banking organizations said they have made investments in big data. Over the next couple of years, the survey suggests, investments will be concentrated in the transportation, healthcare and insurance sectors.

What Is Instigating Companies To Invest In Big Data?

Following the strong precedent set by billion-dollar companies like Google and Facebook, enterprises worldwide have understood that big data can have a significant impact on revenue. It is therefore no surprise that more and more organizations are looking to invest in it.

In most cases, big data, if analyzed and used properly, can help companies learn about customer experience and expectations. Big data analysis produces highly useful insights that help companies make smart business decisions.

When Facebook Concluded Largest Hadoop Data Migration Ever


Since the rise of services like Facebook, the era of storing massive amounts of data on servers is well and truly here. The content shared on the internet grows enormously with every passing day, and managing it is becoming a problem for organizations across the globe.

Facebook recently undertook the largest data migration ever. Its infrastructure team moved dozens of petabytes of data to a new data center, a task that was anything but easy, yet well executed.

Over the past couple of years, the amount of data stored and processed by Facebook servers has grown exponentially, increasing the need for warehouse infrastructure and superior IT architecture.

Facebook stores its data on HDFS, the Hadoop Distributed File System. In 2011, Facebook had almost 60 petabytes of data on Hadoop, which posed serious power and storage capacity issues. Facebook's engineers were then compelled to move this data to a larger data center.

Data Move

The amount of content exchanged on Facebook daily has created demand for a large team of data infrastructure professionals who analyze all the data and serve it back in the quickest and most convenient way. Handling data at this scale requires large data centers.

So, considering the amount of data that had piled up, Facebook's infrastructure team concluded the largest data migration ever, moving petabytes of data to a new data center.

This was the largest-scale data migration ever undertaken. To carry it out, Facebook set up a replication system to mirror changes from the smaller cluster to the larger one, which allowed all the files to be transferred.

First, the infrastructure team used replication to copy the bulk of the data from the source cluster to the destination cluster. Then the smaller files, Hive objects and user directories were copied over to the new cluster.

The process was complex, but because the replication approach minimizes downtime (the time needed to bring the old and new clusters to an identical state), it became possible to transfer data on a large scale without a glitch.
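
The two-phase pattern described above can be sketched with stock Hadoop tooling. The snippet below shells out to DistCp, Hadoop's standard bulk-copy tool; it is a hedged sketch of the same shape (one big bulk copy, then a short incremental pass to catch changes made while the bulk copy ran), not Facebook's in-house replication system, and the cluster addresses and paths are placeholder assumptions.

```python
# Sketch of a bulk-then-incremental HDFS migration using stock DistCp.
# Mirrors the pattern described above; not Facebook's in-house replication system.
# Cluster hostnames and paths are placeholder assumptions.
import subprocess

SRC = "hdfs://old-cluster.example.com:8020/warehouse"
DST = "hdfs://new-cluster.example.com:8020/warehouse"

def distcp(*extra_args: str) -> None:
    subprocess.run(["hadoop", "distcp", *extra_args, SRC, DST], check=True)

# Phase 1: copy the bulk of the data while the old cluster stays live.
distcp()

# Phase 2: a short -update pass copies only new or changed files, run during a
# brief write freeze so both clusters end up in an identical state.
distcp("-update")
```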

Learning curve

According to Facebook, the infrastructure team had used a replication system like this before. But the earlier clusters were smaller and could not keep up with the rate of data creation, which meant they were no longer enough.

The team worked day in and day out for the data transfer. With the use of the replication approach, the migration of data became a seamless process.

With the massive data set now transferred to a bigger cluster, Facebook can continue delivering relevant data to all its users.

How big data influences the stock market


In a real-world scenario, if market manipulators are taken out of the equation, buying stock becomes a worthwhile acquisition. By acquiring stock, you place a high stake on how the company performs, since the market itself is a true determinant of the value of the company you're investing in. But what happens when the data on which decisions are to be made increases manifold? Read on to learn how this rapid rise in data influences stock markets.

Due to our heavy dependence on and use of mobile devices, new media and social media, there has been a massive increase in the amount of data being generated and processed. This rising data volume is the result of a sweeping shift of digital activity from the office to online.

Big Data and Stock Markets

Over the years, the amount of data (dubbed big data) has grown to almost a billion times what it was before, say, the rise of the mobile phenomenon. Trading data volumes have grown by a similar factor, bringing with them a comparable surge in trade transactions.

Big data

Big data refers to data sets so large that they are virtually impossible for conventional technology to process. Big data offers companies a great advantage, but only if they have the IT infrastructure and the human know-how to manage and process it.

Elements of big data:

  • Over 90 percent of the data in the world has been created in the past two odd years
  • It is estimated that data production in 2020 will be about 44 times greater than in 2009
  • The enormous increase in data is straining the stock market's IT infrastructure

The Stock Market

In the stock market, people buy stocks for two reasons: because they like the company and because they want to be part of its growth. But in practice, very little real human judgment is now involved in the stock market.

The number of transactions executed on the market by real humans is a very small fraction of daily trading volume. The real money game of buying and selling stocks for the most serious traders is carried out by automated systems that do the heavy lifting. In this contest of whose algorithm is better, the stock market has become a robot war, and the amount of data to be processed has grown to a point that the market's IT infrastructure struggles to handle.

In the stock market, each trade creates a ripple effect whose size grows with the size of the trade. When trading happens at speeds beyond control, the ripple effect can confuse other machines, triggering waves of buying and selling as each system tries to maximize profit. Even though the market is influenced by these ripple effects, current stakeholders make every effort to keep the system operational.
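
The feedback loop described here can be illustrated with a toy simulation: a crowd of identical trend-following bots each react to the last price move, so one small shock keeps getting amplified. This is purely illustrative; the number of bots and their sensitivity are invented, and no real market behaves this simply.

```python
# Toy illustration of the "ripple effect": identical trend-following bots react to
# each other's trades and amplify one small shock. Invented numbers, not a market model.
prices = [100.0]
shock = 0.5          # one genuine piece of news nudges the price
bots = 50            # identical bots, all watching the last price move
sensitivity = 0.022  # price impact per bot per unit move; 50 * 0.022 > 1 means each round amplifies

prices.append(prices[-1] + shock)
for _ in range(10):
    last_move = prices[-1] - prices[-2]
    # every bot buys into a rise (or sells into a fall), pushing the price further
    prices.append(prices[-1] + bots * sensitivity * last_move)

print([round(p, 2) for p in prices])  # the initial 0.5 move snowballs round after round
```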

However, given the speed, scale and volume of the data being created in the stock market, the problem of making sense of it keeps growing.

Big data that is not managed properly can itself be dangerous for the market, and coupled with such speed, scale and volume, its effect will certainly be something new. As the pace grows, data is making the human touch increasingly insignificant in the market. The market is becoming a robotic battlefield where large IT infrastructure and supercomputers play the money game.