New York minute: The blink-and-you’ll-miss-it pace of change in data monetization | #BigDataNYC

The BigDataNYC 2016 conference wrapped today after multiple days of conversations and controversy over the destiny of data for enterprises. Everyone offered their own opinions on where the gold is today and where it might be tomorrow. A couple of trends stand out as solid enough to last until the 2017 conference (maybe).

Dave Vellante (@dvellante), cohost of theCUBE, from the SiliconANGLE Media team, remarked that it seemed only yesterday companies predicted they’d profit from selling their data. “A lot of companies made the mistake early on of, ‘OK, well how are we going to monetize our data?’ Well, you can’t. You’re going to go compete against data markets?” he said.

Cohost Peter Burris (@plburris) clarified that companies will monetize data by mixing it into their business models, not by sticking a price tag on it. “There was this whole notion of the data economy where everybody was going to sell data to each other,” Burris recalled.

“And a good data scientist gets in the middle of that and says, ‘Yes, please, because I’ll take all your data, and I will re-engineer your customers, what your customers want, what your customers are buying — I will take all your customers away from you in a week and a half,'” he said.

Open-source fatigue

Vellante and Burris lamented the “broken promises” of open source and its failure so far to solve Big Data problems at scale.

“A lot of hard work is going to go into making all of this stuff deliver on these enormous promises that, quite frankly, will deliver. But it’s just going to take a little bit of time,” Burris said.

Vellante said services that deliver all-in-one solutions are looking more attractive than open source lately. “People don’t really know how to use Flume and Sqoop and Hive and Pig and all these other toolsets, so what do they do? They call Cloudera,” he said.

Stay tuned for the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of BigDataNYC 2016.

Photo by SiliconANGLE

Starting the ignition for cloud-powered car dealers | #splunkconf16

Asked what physical devices they think of first in terms of dependency upon cloud-stored data, many people would likely answer with phones or other small, portable devices. But as the cloud industry continues to mature, it’s finding interest from those who handle much larger machines.

At the Splunk.conf 2016 convention, Steve Hatch, manager at Cox Automotive Inc., met with John Walls (@JohnWalls21) and John Furrier (@furrier), cohosts of theCUBE, from the SiliconANGLE Media team, to discuss his company’s use of data management and Splunk platforms to enable connections throughout the lifetime of an automobile.

Cox’s work

Hatch began by laying out the essence of Cox’s business model. “Cox Automotive represents the ecosystem of a car, from the dealer’s side or consumer’s side, from when the manufacturer produces the car, and all the different services that a dealer would leverage, up until that car is sold,” he said.

He continued: “And then all the functions that a consumer will use, whether it’s insurance and ownership of a car, parts and services, and once they decide to sell or trade that car back to a dealer, it goes right back through that life-cycle again. Eventually, salvage and recycle.”

Splunk’s role

Hatch proceeded to examine how the data garnered from all of these sources and interactions could be used to power the company’s sales and growth. “By way of Splunk log analytics, business analytics, you can take that data by way of marketing that would then influence specific traffic on your websites,” he said. “Those websites turn into transactions, which then produce something that can be either purchased or sold, that will deplete someone’s inventory, and then that provider can then fulfill it automatically, because they’re already aware.”

Hatch continued: “Initially, it was a matter of putting all of this data into one centralized location. And the way that we’ve leveraged Splunk is by way of Splunk Cloud. Splunk Cloud allows us to not have to worry about so many of the on-premise challenges of firewalls, different data centers and different security policies, where everyone has an Internet pipe, send all that data to a common place, and from there, now we can search an index against that data.”
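
To make the pattern Hatch describes concrete, here is a minimal sketch of how an application at any site might forward events to a central Splunk index over plain outbound HTTPS using Splunk’s HTTP Event Collector. The endpoint URL, token, index name and event fields below are placeholders for illustration, not Cox’s actual configuration.

    import json
    import requests

    # Placeholders: a real deployment would use its own Splunk Cloud HEC
    # endpoint, a provisioned token and its own index names.
    HEC_URL = "https://example.splunkcloud.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

    def send_event(event, sourcetype, index="automotive"):
        """Forward one application event to the shared Splunk index."""
        payload = {
            "event": event,            # the event body itself
            "sourcetype": sourcetype,  # tells Splunk how to parse it
            "index": index,            # the common, centrally searchable index
        }
        resp = requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            data=json.dumps(payload),
            timeout=10,
        )
        resp.raise_for_status()

    # Each site only needs an outbound Internet pipe; no firewall rules
    # between data centers, because every event lands in the same index.
    send_event({"vin": "TEST123", "action": "listed", "price": 18500},
               sourcetype="dealer:inventory")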

Finding the value

Hatch also shared some of the indispensable needs that had to be met to best acquire and use this data. “You have to get out there and sell it, you have to establish partnerships, then you have to ingest all that data,” he said.

“Ingesting the data, on-boarding your users, is just one aspect of it,” he noted. “Where we’re going now is getting the value out of it, because over time, the CTO who signed this deal [asks]: ‘Where’s that value? Great, you have terabytes and terabytes of data, where’s my ROI?’”

Moving forward

Asked where he sees room for future expansion, Hatch covered a number of areas, beginning with security, followed by “the business analytics that can go directly into business, specifically marketing the sales. Allow them to leverage this tool, which is not only geared towards the technical … so they can build out their own dashboards and templates to actually get value out of Splunk without ever having any kind of technical discipline.”

Hatch continued: “And also, it’s a matter of making sure that Splunk can be possibly that platform that allows our architects to now go back and get everyone standard on key performance indicators, to allow all business units to now have indicators that are aligned. The more we are aligned as a company, as an enterprise, we can get that much more value out of Splunk.”

Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of Splunk.conf 2016.

Photo by SiliconANGLE

Collaborating to drive data cataloging | #BigDataNYC

Alation, Inc. recently announced Q4 plans to release its Alation Data Catalog 4.0 with Alation Connect, a new connectivity layer that catalogs queries from popular compute engines, including Presto, SparkSQL and IBM Watson DataWorks. Teradata Corp. has partnered with Alation to resell the Alation Data Catalog to Teradata customers and prospects, especially those in data environments that have grown more complex with Big Data. The exponential growth of data by volume and type makes it necessary to provide referential resources for collaboration among enterprise users.

Stephanie McReynolds, VP of Marketing at Alation, and Mark Shainman, Marketing Director at Teradata, joined Dave Vellante (@dvellante) and Peter Burris (@plburris), cohosts of theCUBE, from the SiliconANGLE Media team, during BigDataNYC 2016 to discuss their partnership, how the Data Catalog works for customers and how to handle Big Data.

Do you have a data lake or a data swamp?

Vellante brought up the point that while there is much complaining about Hadoop, including its data lake concept, it did get the data to where it needed to be. How companies deal with that data after collecting it is the issue, and that’s where Alation and Teradata come into play.

“Is it a data lake or a data swamp? … Different organizations are [all] at different phases of figuring out the data lake … [but they all] need governance,” said McReynolds. As more users come into the lake, if there’s no way for them to see what’s already there and gauge the quality of that information, the data, so carefully collected, can be useless. So it’s necessary to have “a catalog that reads and interprets data … as we get more people running queries … we need something like a data catalog to see and understand what’s in there,” continued McReynolds.
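
As a rough illustration of what “a catalog that reads and interprets data” keeps track of, the toy model below records per-dataset metadata that lets a new user judge what is in the lake before querying it. The field names are invented for the example and are not Alation’s actual schema.

    from dataclasses import dataclass

    # A toy model of the metadata a data catalog might keep per dataset;
    # the fields here are illustrative, not Alation's actual schema.
    @dataclass
    class CatalogEntry:
        name: str
        owner: str
        source_system: str
        columns: dict            # column name -> type
        quality_score: float     # e.g., share of rows passing checks
        query_count: int = 0     # popularity signal from query logs

        def is_trustworthy(self, min_quality=0.9):
            """A simple governance gate: surface only vetted datasets."""
            return self.quality_score >= min_quality

    entry = CatalogEntry(
        name="customer_transactions",
        owner="finance-team",
        source_system="teradata",
        columns={"customer_id": "int", "amount": "decimal", "ts": "timestamp"},
        quality_score=0.97,
        query_count=1842,
    )
    print(entry.is_trustworthy())  # True: safe to recommend to analysts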

Presto (an open source SQL query engine that Facebook developed) was designed and written for interactive analytics, approaching the speed of commercial data warehouses while scaling to the size of large organizations. “[Presto was built by Facebook], then they open-sourced it. [Teradata] is a major contributor to the code base,” said Shainman. Teradata sees Presto as filling a specific niche: running interactive queries against large sets of data with low latency for many users.

Handling Big Data

The discussion moved to Teradata’s play in Big Data. Vellante asked, “What’s the most important part of your Big Data?”

Shainman answered: “Hadoop and Big Data are all synergistic to the data warehouse … [we realize] that multiple platforms are going to exist in one organization. … We’ve moved away from this silo[ed] set up … Alation brings in the governance and cataloging.”

Stay tuned for the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of BigDataNYC 2016.

Photo by SiliconANGLE

Is your Big Data strategy a $15 million Excel download? | #BigDataNYC

Customization is a funny thing in that it’s never really finished — not for a living enterprise with an evolving set of problems to solve. A one-of-a-kind Big Data program built for a specific business’ concerns sounds neat, but the expiration date is a downer: it arrives the first day users have a question the system isn’t programmed to answer. So should companies start every analytics project from zero, or is there a middle path?

Nenshad Bardoliwalla, cofounder and chief product officer at Paxata Inc., said there have been two ways enterprises have come at data analytics. The first is “we’re going to know all the possible questions that people are going to want to ask in advance. We’re going to pre-program the ETL routines, we’re going to put in something like a MicroStrategy or BusinessObjects, an enterprise reporting factory tool,” he said.

During the BigDataNYC event, Bardoliwalla explained to George Gilbert (@ggilbert41) and Dave Vellante (@dvellante), cohosts of theCUBE, from the SiliconANGLE Media team, the inevitable outcome of this approach: Users decide they want to ask a new question or attack a problem from a different angle.

“It takes them about five minutes to determine that they can’t do it for whatever reason. And what is the first feature that they look for in the product in order to move forward? Download to Excel,” he said. “So you’ve invested $15 million to build a ‘download to Excel’ capability, which they already had.”

A road out of ‘Excel hell’

The second approach, said Bardoliwalla, is known as “Excel hell,” where everyone in the organization is using and modifying data in Excel sheets, often with conflicting results.

He said Paxata’s point-and-click visual interface provides a middle path. First, customers whittle down the data quickly. “You look at an age column, let’s say, and there are values in the age column of 150 years,” he said. “Customers at the banks we work with are not 150 years old.”
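
The age-column rule Bardoliwalla describes is easy to picture as a data-quality filter. The sketch below is the programmatic equivalent of what a visual data prep tool surfaces with a click; it is a generic pandas illustration, not Paxata’s implementation, and the 18-to-120 bounds are arbitrary.

    import pandas as pd

    # Sample records; in practice this would be millions of rows.
    df = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "age": [34, 150, 27, -1],   # 150 and -1 are obviously bad values
    })

    # Flag values outside a plausible range rather than silently dropping
    # them, so an analyst can review what the rule caught.
    valid = df["age"].between(18, 120)
    print(df[~valid])         # rows needing review
    clean = df[valid].copy()  # the whittled-down working set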

With that done, users can then decide what questions to ask and record the results. They can also track the outcomes of any insights they operationalize, which enables transparency and consistency across the organization.

Stay tuned for the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of BigDataNYC 2016.

Photo by SiliconANGLE

Nvidia demonstrates self-driving car that learned entirely from watching humans

Nvidia Corp. has released a new video showing off its latest progress in autonomous vehicle technology, this time with a self-driving car that has learned entirely from watching human drivers.

“In contrast to the usual approach to operating self-driving cars, we did not program any explicit object detection, mapping, path planning or control components into this car,” Nvidia explained in the description for the video. “Instead, the car learns on its own to create all necessary internal representations necessary to steer, simply by observing human drivers.”

In the video, Nvidia shows that its self-driving car is able to navigate through complicated road environments, including unlined streets, construction zones, blind corners and so on. The company even noted that while the car was trained in California, it was able to drive in New Jersey without difficulty.

According to Nvidia, the car’s AI learned how to drive with data from only 20 example runs by different drivers at various times of day. “Learning to drive in these complex environments demonstrates new capabilities of deep neural networks,” Nvidia said.

Earlier this year, Nvidia wrote a blog post in which it outlined its process for training its self-driving car, which involves the use of a convolutional neural network (CNN) that taught the car how to drive. The research team trained the CNN with driving footage that was shot from a front-facing camera, which was then synced with steering data that was recorded from the drives.

“Our engineering team never explicitly trained the CNN to detect road outlines,” Nvidia said in its blog post at the time. “Instead, using the human steering wheel angles versus the road as a guide, it began to understand the rules of engagement between vehicle and road.”
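
For readers who want a feel for this behavioral-cloning setup, the sketch below trains a small convolutional network to map camera frames to a single steering value, supervised by recorded human angles. It is a drastically simplified PyTorch illustration with arbitrary layer sizes and random stand-in data, not Nvidia’s actual network or training pipeline.

    import torch
    import torch.nn as nn

    # Heavily simplified behavioral cloning: camera frames in, one
    # steering value out, supervised by recorded human steering angles.
    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            )
            # 48 channels x 6 x 23 spatial positions for 66x200 inputs.
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(48 * 6 * 23, 64), nn.ReLU(),
                nn.Linear(64, 1),   # predicted steering angle
            )

        def forward(self, x):
            return self.head(self.features(x))

    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # Stand-ins for synced (front-camera frame, steering angle) pairs.
    frames = torch.randn(8, 3, 66, 200)
    angles = torch.randn(8, 1)

    for _ in range(3):  # a few illustrative gradient steps
        optimizer.zero_grad()
        loss = loss_fn(model(frames), angles)
        loss.backward()
        optimizer.step()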

Nvidia’s leap into machine learning

Nvidia has been a household name with PC gamers for some time now thanks to its high-end graphics processing units (GPUs), most notably the GeForce series, but the company recently made a splash in the enterprise by elbowing its way into the machine learning sector. As it turns out, the same GPUs that power bleeding-edge gaming rigs are also excellent for machine learning research.

Nvidia is also on its way to becoming a major player in self-driving cars, and last month the company announced a new partnership with Chinese tech firm Baidu Inc. that will combine Baidu’s cloud-updated 3D maps with Nvidia’s hardware and machine learning capabilities.

If you want a more in-depth explanation of how Nvidia developed its self-driving AI, you can read the company’s research paper, titled “End to End Learning for Self-Driving Cars.” You can also watch Nvidia’s demonstration video below:

Image courtesy of Nvidia Corp

Creating new tools to bring together the data science community | #BigDataNYC

Data science is a land with strange and shifting borders. It’s hard to say what makes a data scientist, as the skills required vary from one project, and one company, to the next. Further, the tools and technology involved are changing as quickly as anything else in the computer world. Bringing some definition and stability to the data science community is a necessary step in the evolution of the field.

To gain some insight on the world of data science, Dave Vellante (@dvellante) and Jeff Frick (@JeffFrick), cohosts of theCUBE, from the SiliconANGLE Media team, visited the BigDataNYC 2016 conference in New York. There, they talked with Armand Ruiz Gabernet, lead product manager, IBM Data Science Experience, at IBM.

Introducing the Data Science Experience

The conversation started with a look at a new tool developed by IBM, the Data Science Experience. Gabernet explained that IBM had seen a big gap in the tools used by data scientists and difficulties in getting those tools to work together. The company set out to create something new: a system with a clean, polished UI built on open-source code. This became the Data Science Experience.

The Data Science Experience features a big community component. Gabernet pointed out that a big part of data science work was going online to find the most recent information and solutions. Now, IBM brings it all together through this new tool. He stated that users can start working and collaborating with one click.

Collaboration, science and community

Gabernet mentioned the big question surrounding the field: What is data science? The concept is evolving. Companies have teams of data scientists with different skills. IBM is trying to bring them all into one platform.

“We had the feeling we were doing the right thing, but we’ve had it confirmed by the community,” he said.

Being a data scientist today is hard because you have to learn new things every week, Gabernet explained, and it’s difficult to keep up. There is automation, but the machine learning process is never-ending. Data scientists are always coming back to improve their accuracy. The process is endless, and they’re never finished with their work, he added.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of BigDataNYC 2016.

Photo by SiliconANGLE

Cyberbit and ETA to develop cybersecurity training range

Cyberdefense may not be quite as glamorous as it is portrayed in movies from the 1990s (watch Hackers with Jonny Lee Miller and Angelina Jolie anyway). But unlike in the ’90s, cybersecurity has become an endurance profession.

Not a week goes by that yet another big company doesn’t announce a security breach leaking thousands of customer records into the wild. In an effort to prepare the next generation of cybersecurity warriors, Electronic Technology Associates Inc. (ETA) and Cyberbit Ltd. have joined to build the first live, standalone, hands-on cybersecurity training center in the U.S.

The location in Baltimore, MD, places the ETA Cyber Range within what Bloomberg calls the “Silicon Valley of Cybersecurity” and in close proximity to Fort Meade, MD, and the Aberdeen Proving Ground, both of which are hotbeds for U.S. government cybersecurity operations. The Cyber Range will initially employ 10 highly trained cybersecurity professionals as staff and expects to employ as many as 100 by the end of 2017.

“Public sector organizations manage highly-sensitive infrastructure and cannot afford to have staff’s first encounter with a threat occur during attacks,” said Adi Dar, chief executive of Cyberbit. “By training and simulating the response process in advance, security staff can dramatically improve their performance. I am looking forward to helping Baltimore’s industry create top-notch security experts by means of the Range platform.”

Although the location puts the ETA Cyber Range within striking distance of many public sector cybersecurity operations, enterprise businesses could benefit hugely from the same sort of training, as they are targeted just as often. In 2015, U.S. businesses saw an average of 160 successful cyberattacks per week.

When it comes to enterprise applications, it’s an online world, and while that world is no longer the Wild West of the web, it’s still a place filled with threats that can cost businesses time, money and customers. To combat this, the enterprise sector is spending more money than in 2015 to secure its infrastructure. According to a forecast from Gartner, spending on security products and services is set to reach $81 billion in 2016, an increase of 7.9 percent over 2015. And the market is expected to soar to $170 billion by 2020.

As more and more customer data is digitized and put online, and more enterprise businesses turn to networks to enhance communication, the costs of cybercrime will continue to increase. In 2015, British insurance company Lloyd’s of London estimated that cyberattacks cost businesses as much as $400 billion a year.

An example of possible training courses at the ETA Cyber Range. Image courtesy of Cyberbit.

The Cyber Range Platform

While a tremendous amount of cybersecurity is done before an operation puts itself online—setting up firewalls, installing instrumentation, inspecting protocols and essentially putting cyber-bars on doors and windows—once things are running, security professionals are dealing with threats in real time. This means that preparation is indeed half the battle; the other half is knowing what to look for and how to respond.

Speaking to SiliconANGLE, Stephen Thomas, Cyberbit vice president of sales, explained that the ETA Cyber Range will run on Cyberbit’s “Cyber Range Platform,” a sophisticated application that can simulate a cyberattack against a network and puts security professionals into situations they can expect during a real attack.

As cybersecurity is team-based, the Cyber Range “allocates individual team members into their roles in the environment,” Thomas said. Professionals get trained and practice as teams, which gives them a chance to experience an attack without the company’s resources being on the line.

The Cyber Range is set up to simulate numerous different types of attacks, and for most of them it compresses the time they take down to a few hours or a few days, when in real life most cyberattacks against an institution may take days or months (most hackers are extremely patient and wait for an opening rather than prying one open, because that gets an attack noticed). Attacks can be simulated for ransomware, Trojans that have snuck into the network, port scans, SQL injection, Java Applet Send Mail, WMI worms and many more security threats.
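
Of the attack classes listed, SQL injection is the easiest to show in a few lines. The toy below demonstrates the mechanic trainees learn to spot, plus the standard parameterized-query defense; it has nothing to do with Cyberbit’s platform internals.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("alice", 0), ("admin", 1)])

    attacker_input = "alice' OR '1'='1"

    # Vulnerable: input is pasted into the SQL string, so the injected OR
    # clause rewrites the query's logic and matches every row in the table.
    vulnerable = f"SELECT * FROM users WHERE name = '{attacker_input}'"
    print(conn.execute(vulnerable).fetchall())
    # [('alice', 0), ('admin', 1)]  <- the whole table leaks

    # Defense: a parameterized query treats the input as pure data.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (attacker_input,)).fetchall())
    # []  <- no user is literally named "alice' OR '1'='1"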

While in the Cyber Range, professionals are put into an environment that not only simulates attacks but also simulates normal operations. This means that the Cyber Range sets up and simulates a real-world network, showing traffic going to and from applications. In most real-world cybersecurity incidents, an attack sneaks in amid normal traffic, so training to tell the difference and react swiftly is critical practice.

Thomas also told SiliconANGLE that the Cyber Range is capable of simulating most popular cybersecurity software suites so that teams can practice with their own setups.

As for simulating attacks, Thomas said that the Cyber Range can pit a team of professionals against a human team (in cyberwarfare terms this would be a “Red Team”) or simply automate the opposing team. While it’s much better training to go up against a Red Team, Thomas joked, the downside is that having humans on the other end of a simulated attack can cause training to go off script. However, both can be crucial to effective practice that simulates real-world scenarios cybersecurity professionals may encounter.

“We’ve even had people bring trainees into the range at 2 a.m., when people are disoriented and have them do a scenario,” Thomas added. “Since in the real world you can’t always expect attackers to work on your schedule and some attacks happen during times when people are less ready to respond.”

photo credit: Cybercrime via photopin (license)

Just like sports practice, the ETA Cyber Range will have the capability of recording the entire simulation from start to finish by “flight recording” each seat for every team member’s role, Thomas told SiliconANGLE. The staff at the Cyber Range will also be able to sit in a gallery and watch as a team trains on the range and annotate events with their own commentary (all without interrupting the proceedings).

This will provide teams a way to debrief after an incident simulation and do a play-by-play of what happened and enable them to better understand what they missed and how they can do better next time.

For a real-life cybersecurity event, this sort of hindsight only happens during the post-mortem of an attack, while a cyber forensics team is trying to determine what happened. Being able to review how well a team did against a simulated attack not only provides good training in what an attack looks and feels like, it also provides a way for a team to discover weaknesses in its own understanding, communication or security protocol.

Setting the standard

By providing a standalone Cyber Range, ETA and Cyberbit hope to attract security professionals who want to practice and experience live-fire simulations of cyberattacks in order to hone their skills.

To do this the Cyber Range will have regular training sessions, like a school with a syllabus for training on contemporary cyberthreats, but will also allow enterprise and government teams to buy out time on the range.

No details have been released on how much it will cost for training sessions or free-play practice simulations, but Thomas suggests that the cost will be competitive for the industry.

For more information on the ETA Cyber Range and Cyberbit, check out the website for details.

Featured image credit: leyrlo Computer_stock via photopin (license)

Shape Security raises $40M to fight cyber attacks with machine learning

Mountain View-based cybersecurity firm Shape Security has just secured an impressive $40 million in Series D funding led by EDBI, the investment arm of the Singapore government’s economic development board. The funding round also included investments from both Google Ventures and Hewlett Packard Enterprise’s (HPE) Pathfinder program.

According to a statement by Singapore’s EDBI, in addition to its investment in Shape Security, the agency will help the company expand in the Asian market, which has its share of struggles with cybersecurity threats.

“As automated attacks on web and mobile sites become more prevalent and harder to defend against with existing solutions, we believe Shape Security’s highly innovative cybersecurity platform can be a game changer that offers enterprises real-time protection against such threats,” said CHU Swee Yeok, CEO and President of EDBI. “We are pleased that Shape is leveraging Singapore to access Asian customers and partners in the region to advance their global growth strategy.”

On its website, Shape Security notes that today’s top three cybersecurity threats are not manual attacks like in the olden days of hackers, but rather automated attacks that are difficult to stop. These threats include credential stuffing, which uses brute force to break into a system by entering matching pairs of compromised usernames and passwords; content scraping, which rips unprotected text and other content from websites for use in other applications; and application-layer distributed denial of service (DDoS), which overloads a website or other online system by flooding it with millions of seemingly legitimate data requests.
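
Credential stuffing has a distinctive signature: one source failing logins across many different accounts, rather than one user fumbling a password. The sketch below is a deliberately naive per-IP heuristic to illustrate that signature; real defenses, including Shape’s, are far more sophisticated, and the threshold here is arbitrary.

    from collections import defaultdict

    # Naive heuristic for the credential-stuffing pattern: one source
    # cycling through many leaked username/password pairs.
    DISTINCT_USER_LIMIT = 20

    failed_users_by_ip = defaultdict(set)

    def record_failed_login(ip, username):
        """Return True once this source looks like a stuffing bot."""
        failed_users_by_ip[ip].add(username)
        return len(failed_users_by_ip[ip]) > DISTINCT_USER_LIMIT

    # A human mistyping hits one or two usernames; a bot replaying a
    # breached credential list fails across many distinct accounts.
    for i in range(25):
        flagged = record_failed_login("203.0.113.7", f"user{i}")
    print(flagged)  # True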

Shape offers several layers of protection with its “security as a service” product, which is aimed at defending websites and servers from these sorts of attacks. Shape’s security service includes a few of the usual features like active threat monitoring, but some of Shape’s other features take advantage of machine learning to continuously adapt to new automated attacks.

For example, the company’s ShapeShifter application subtly alters the source code of a website each time it is viewed, making it difficult for automated bots to accurately read and understand the information, thereby making it harder to exploit.
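
The idea of per-view polymorphism can be illustrated with a toy server that renames form fields randomly on every render and translates them back on submission, so a bot scripted against fixed field names breaks. This is only a sketch of the general concept, not how ShapeShifter is actually implemented.

    import secrets

    # Toy per-view markup polymorphism: every render gets fresh, random
    # form-field names; the server keeps the mapping (e.g., in the session).
    def render_login_form():
        mapping = {"username": secrets.token_hex(8),
                   "password": secrets.token_hex(8)}
        html = (
            f'<form method="post">'
            f'<input name="{mapping["username"]}" type="text">'
            f'<input name="{mapping["password"]}" type="password">'
            f"</form>"
        )
        return html, mapping

    def decode_submission(form_data, mapping):
        """Translate random names back to real fields on the server."""
        return {real: form_data[rand] for real, rand in mapping.items()}

    html, mapping = render_login_form()
    submitted = {mapping["username"]: "alice",
                 mapping["password"]: "hunter2"}
    print(decode_submission(submitted, mapping))
    # {'username': 'alice', 'password': 'hunter2'}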

According to Shape, its service analyzes around 1.1 billion login requests each week for its customers, and its Shape Cloud service handles up to 500,000 requests per second. The company claims that its security features have helped prevent over $1 billion in fraud losses.

The fight never ends

While companies like Shape Security are certainly making impressive strides in cybersecurity technology thanks to machine learning and other innovations, they are not likely to be able to claim victory any time soon, and many security experts are concerned about the possibility of cybercriminals taking advantage of some of those same innovations to develop smarter malicious programs.

Earlier this year, for example, researchers at the University of Louisville in Kentucky published a paper outlining the potential for “malevolent AI,” which would essentially function like intelligent computer viruses. Rather than using brute force as an automated attack, as is the case with DDoS attacks, an AI powered by machine learning could intelligently attack systems and exploit weaknesses with a precision that could be difficult to stop.

This is one reason that a number of tech companies and research groups, such as Elon Musk’s OpenAI project, are pushing for ethical guidelines for the development of AI. Of course, these guidelines would only matter if everyone actually followed them.

Photo by perspec_photo88 

Why should Google Analytics get all the credit? Tracing the path to purchase with martech | #BigDataNYC

A restaurant puts up a billboard advertising all-day pancakes across the street from a competitor that serves pancakes only until 11 a.m. The billboard shows the restaurant’s name, but not its address. The restaurant prints a survey question on its bills: “How did you find us?” A majority of diners who order pancakes after 11 a.m. respond, “Google.” Does the restaurant get a greater return from the billboard or from Google optimization?

This little thought exercise illustrates a marketing model known as attribution. Increasingly, in digital marketing, reliance on last-click attribution is being targeted as a problem area. Wendi Dunlap, director of Global Agency Partnerships at Oracle Marketing Cloud, explained, “Last click attribution is easy; it’s simple.”

However, Dunlap told Dave Vellante (@dvellante) and Jeff Frick (@JeffFrick), cohosts of theCUBE, from the SiliconANGLE Media team, during BigDataNYC 2016 that this linear approach doesn’t reflect reality; a consumer’s path to purchase is often circular or crisscrossed.

“When you start talking about multi-click attribution and marketing mix modeling, the conversation becomes a lot more nuanced,” she said.
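
The difference Dunlap is pointing at is easy to see in code. The sketch below credits a single conversion two ways: last-click, where the final touchpoint takes everything, and a simple linear multi-touch model that spreads credit evenly. The touchpoints and values are invented, and production models weight by position, decay or fitted coefficients.

    from collections import defaultdict

    # One consumer's path to purchase and the value of the conversion.
    path = ["billboard", "organic_search", "email", "organic_search"]
    conversion_value = 100.0

    def last_click(path, value):
        # The final touchpoint gets all the credit.
        return {path[-1]: value}

    def linear(path, value):
        # Every touchpoint gets an equal share of the credit.
        credit = defaultdict(float)
        for touch in path:
            credit[touch] += value / len(path)
        return dict(credit)

    print(last_click(path, conversion_value))
    # {'organic_search': 100.0} -> Google gets all the credit
    print(linear(path, conversion_value))
    # {'billboard': 25.0, 'organic_search': 50.0, 'email': 25.0}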

Pieces of the purchase puzzle

Dunlap said that different factions of marketing and advertising are now coming together to provide a more complete picture of how consumers arrive at a given destination.

“The lines are blurring between consultancies and agencies,” she said, adding that they each bring crucial bits of information to the table — particularly agencies. “They’re the single entity that has that end-to-end consumer view. No other organization has that,” she said.

Psycho data analysis

She explained that a customer relationship management tool, which is built into the Oracle Marketing Cloud, lets businesses see well beyond that often superficial last click.

“When we think about martech [the blending of marketing and technology], that’s really where we’re getting into identity and the fidelity of getting down to that single user ID.”

In other words, let’s stop giving Google all the credit.

Stay tuned for the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of BigDataNYC 2016.

Photo by SiliconANGLE

Taking action on cluttered data lakes | #BigDataNYC

With the development of Big Data, the importance of data governance and lineage has grown correspondingly, creating a more pressing need for IT departments to be able to quickly assess assorted aspects of the data flows in and out of their centers.

At this year’s BigDataNYC event, Tony Fisher, SVP of Business Development/Strategy at Zaloni Inc., and Kelly Schupp, VP of Marketing at Zaloni, sat down with Dave Vellante (@dvellante) and Peter Burris (@plburris), cohosts of theCUBE, from the SiliconANGLE Media team, to discuss Big Data and the drive to improve the standard quality of data lakes.

Ease of management

“Management of Big Data and governance are fundamental to what we do at Zaloni,” Fisher stated early on, proceeding from there to outline some of the ways in which Zaloni works to catalog and tidy the data of large storage groupings.

And as the need for more efficient data management continues to rise, for a variety of reasons, the partitioning off of data groups is becoming less feasible for active enterprises. As mentioned by Schupp, “We’re starting to see more and more that notion of the enterprise data lake, that everyone [in the enterprise] can access and use.”

She continued: “From the very beginning, we were working with … corporations that needed an operations-ready data lake.” She then explained how Zaloni’s developments to support those needs have led it to its current standing in the data management realm.

Cleaning the lakes

Fisher put forth the idea of the data lake as a more managed environment than what Schupp termed “data dumps or data swamps.”

“We’re taking a lot of those concepts that corporations are comfortable with, and applying them to … scale out architecture,” he explained.

To that end, Zaloni’s Data Lake 360° suite of data management, analysis and governance tools is being deployed. “Every aspect of data within the data lake … all of these are concepts that are required to manage the data lake. Managing the data … is maybe not so subtly different,” Fisher said.

Data comfort

And as the tools for handling data make it easier, the task of bringing that usage to customers is becoming less of a hurdle. “If you can let IT realize that they have some control over the data lake and the data that’s in it … [IT] will get a lot more comfortable to say to the customer, ‘Here’s your on-ramp,’” Schupp stated.

According to Fisher, “Data needs to support the needs of the business; the business doesn’t need to have overbearing influence on the data. The problem with data warehouses is that they tend to implode under their own weight and the governance.” He added, “The data lake environment is different because … you really do get to … keep up with it [as it grows]. … Just by nature of the dynamic architecture, it’s going to be more easy and straight-forward.”

And both Fisher and Schupp were optimistic about the continued developments in their field. “I think, as an industry, we’re getting closer to providing that modern data platform that people need,” Schupp concluded.

Stay tuned for the complete video interview, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of BigDataNYC 2016.

Photo by SiliconANGLE