Results for Tag "tensorflow"

IBM, Nvidia release new PowerAI software suite for deep learning projects

At some point this year, the big buzzword appears to have shifted away from “the cloud” to “deep learning.” In similar fashion, everybody who’s anybody is setting machine learning loose on problems, especially the much-beloved process of deep learning.

IBM is bringing machine learning to a mainframe near you

Modern machine learning, often just referred to as “AI,” has largely been the province of specially trained computer scientists who traffic in exotic modeling frameworks and graduate-level math. As powerful as machine learning frameworks like Google’s TensorFlow are, they still require a lot of specialized knowledge to create models.

Google gives everyone machine learning superpowers with TensorFlow 1.0

That began to change with the release of a number of open-source machine learning frameworks like Theano, Spark ML, Microsoft’s CNTK, and Google’s TensorFlow. With this week’s release of TensorFlow 1.0…
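To give a concrete sense of the kind of code these items allude to, here is a minimal sketch, not taken from any of the articles above, of what fitting a tiny one-variable linear model looks like with the TensorFlow 1.x graph-and-session API. The toy dataset, variable names, and hyperparameters are invented for illustration, and it assumes a TensorFlow 1.x installation:

# Minimal TensorFlow 1.x example: fit y = w*x + b to noisy toy data.
# Illustrative only; requires the tensorflow 1.x package.
import numpy as np
import tensorflow as tf

# Toy data: y = 3x + 2 plus a little noise
x_data = np.linspace(0.0, 1.0, 100).astype(np.float32)
y_data = (3.0 * x_data + 2.0 + np.random.normal(0.0, 0.05, 100)).astype(np.float32)

# Build the computation graph
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = w * x + b
loss = tf.reduce_mean(tf.square(y_pred - y))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Run the graph in a session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={x: x_data, y: y_data})
    print(sess.run([w, b]))  # should approach [3.0, 2.0]

Even at this size, the explicit placeholders, session, and initializer step hint at why the “specialized knowledge” complaint above keeps coming up.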

Google's dedicated TensorFlow processor, or TPU, crushes Intel, Nvidia in inference workloads

Several years ago, Google began working on its own custom software for machine learning and artificial intelligence workloads, dubbed TensorFlow. Now, Google has released performance data for its TPU, showing how it compares with Intel’s Haswell CPUs and Nvidia’s K80 (Kepler-based) dual-GPU data center card.

Nvidia claims Pascal GPUs would challenge Google's TensorFlow TPU in updated benchmarks

Nvidia’s claim that the TPU has 13x the performance of the K80 is provisionally true, but there’s a snag. The net result of these improvements, according to Nvidia, is that the P40 offers 26x more inference performance than one die of a K80.
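For a rough sense of how those two numbers relate (a back-of-the-envelope reading, not a figure published by either company): the 13x claim is measured against a full K80 card, which carries two GPU dies, while the 26x claim is against a single K80 die. Assuming a full K80 delivers roughly twice the throughput of one die, the two speedups land in about the same place, which is the crux of the “would challenge” framing:

# Back-of-the-envelope arithmetic only; the 13x and 26x figures come from the
# article above, and "two dies per K80" reflects the K80 being a dual-GPU card.
tpu_vs_full_k80 = 13.0      # claimed TPU speedup over a full K80 card
p40_vs_one_k80_die = 26.0   # claimed P40 speedup over a single K80 die
dies_per_k80 = 2            # a K80 carries two GPU dies

# Assumption: a full K80 is roughly twice the throughput of one die,
# so the P40's per-die claim works out to about 13x per card.
p40_vs_full_k80 = p40_vs_one_k80_die / dies_per_k80
print(p40_vs_full_k80)      # 13.0 -- in the same ballpark as the TPU figure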