Service Provider Cloud

AWS Speeds Up Machine Learning & Deepens Natural Language Recognition

Amazon Web Services is making it easier for developers to train their machine learning algorithms quickly, and is introducing tools to improve natural language recognition.

As part of its usual fusillade of announcements at its AWS Summit New York developer conference Tuesday, Amazon introduced upgrades to its SageMaker machine learning engine. The new tools are designed to allow developers to get test data to SageMaker faster to speed up training of machine learning algorithms.

Previously, developers could stream data from Amazon S3 cloud storage to train SageMaker's built-in algorithms, eliminating the need to transport hundreds of terabytes of test data to SageMaker. On Tuesday, AWS launched SageMaker Streaming Algorithms, which extends that capability to developers' own custom algorithms. Streaming can reduce training time by up to 90%, improving efficiency and reducing cost, because AWS services are time-metered.
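Streaming training data from S3 corresponds to SageMaker's "Pipe" input mode on a training job. The sketch below, which only assembles the request parameters, shows roughly what that looks like with boto3's `create_training_job` call; the container image URI, IAM role ARN, bucket paths and instance type are all illustrative placeholders, not values from the announcement.

```python
# Sketch: a SageMaker training job configured to stream data from S3
# ("Pipe" input mode) rather than copying it to local disk first.
# All names, ARNs and S3 URIs below are placeholders.

def streaming_training_job(job_name, image_uri, role_arn, s3_train_uri, s3_output_uri):
    """Build the request dict for sagemaker.create_training_job with Pipe mode."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,       # custom algorithm container
            "TrainingInputMode": "Pipe",      # stream from S3 instead of "File"
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": s3_train_uri,
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output_uri},
        "ResourceConfig": {
            "InstanceType": "ml.p3.2xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
    }

# A real run would hand this to boto3:
#   boto3.client("sagemaker").create_training_job(**params)
params = streaming_training_job(
    "demo-job",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
    "arn:aws:iam::123456789012:role/SageMakerRole",
    "s3://my-bucket/train/",
    "s3://my-bucket/output/")
```

Because the data is fed to the container as a stream, training can start immediately instead of waiting for a full copy of the dataset to land on the instance's disk.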

AWS's Matt Wood


Also, SageMaker Batch Transform allows cloud operators to transform data dumps in batches -- for example, processing billing, inventory or product sales data received daily, rather than in real time.
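Batch Transform is exposed through SageMaker's CreateTransformJob API, which points an existing model at a data dump in S3 and writes predictions back to S3. A minimal sketch, assuming a model named "sales-model" and placeholder bucket paths (none of which come from the announcement):

```python
# Sketch: building the request for SageMaker's CreateTransformJob API,
# which runs batch (offline) inference over a data dump in S3.
# Model name, bucket paths and instance type are illustrative placeholders.

def batch_transform_job(job_name, model_name, s3_input_uri, s3_output_uri):
    """Assemble parameters for boto3's sagemaker.create_transform_job()."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,             # a model already registered in SageMaker
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": s3_input_uri,   # e.g. yesterday's sales dump
                }
            },
            "ContentType": "text/csv",
            "SplitType": "Line",             # one record per line
        },
        "TransformOutput": {"S3OutputPath": s3_output_uri},
        "TransformResources": {
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
        },
    }

# In production: boto3.client("sagemaker").create_transform_job(**job)
job = batch_transform_job("daily-sales-scoring", "sales-model",
                          "s3://my-bucket/dumps/2018-07-17/",
                          "s3://my-bucket/scores/")
```

The appeal for daily batch workloads is that compute is spun up only for the duration of the job, which matters because, as noted above, AWS services are time-metered.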

"We see a dramatic decrease in time for customers to take their deep learning models, train them up, and put them into production," Matt Wood, GM of deep learning and AI, said at the conference keynote Tuesday morning.

AWS launched SageMaker in November as a means of simplifying machine learning. (See Amazon Brings Machine Learning to 'Everyday Developers'.)

Additionally, the cloud provider added new natural language capabilities to its Amazon Lex service for building conversational bots. AWS also added Channel Synthesis, which allows its Transcribe service to work on multi-channel audio. That makes Transcribe more useful in call center operations, where audio is typically split into individual tracks for callers and operators. Transcribe operates on each channel separately and then uses timestamps to synthesize compiled transcripts, which enterprises can use for sentiment analysis -- determining whether customers calling in are unhappy, and precisely what they're unhappy about. Enterprises can also use transcripts for compliance, to determine whether agents are using the correct phrasing when interacting with customers.
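In the Transcribe API, per-channel transcription of a call recording is requested via a channel-identification setting on the transcription job. The sketch below builds such a request; note that the API setting name (`ChannelIdentification`) and the job name, bucket and file are assumptions for illustration, not taken from the announcement.

```python
# Sketch: a Transcribe job for a two-channel call-center recording (caller on
# one track, agent on the other), asking the service to transcribe each
# channel separately and merge the results by timestamp.
# Job name and media URI are placeholders.

def call_center_transcription(job_name, media_uri):
    """Parameters for boto3's transcribe.start_transcription_job()."""
    return {
        "TranscriptionJobName": job_name,
        "LanguageCode": "en-US",
        "MediaFormat": "wav",
        "Media": {"MediaFileUri": media_uri},
        "Settings": {
            # Transcribe each audio channel on its own, then merge.
            "ChannelIdentification": True,
        },
    }

# In production: boto3.client("transcribe").start_transcription_job(**req)
req = call_center_transcription("call-1234",
                                "s3://my-bucket/calls/call-1234.wav")
```

The merged, timestamped transcript is what downstream sentiment-analysis or compliance tooling would then consume.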

Amazon Translate also gets support for new languages – namely Japanese, Russian, Italian, traditional Chinese, Turkish and Czech, with a dozen more to come – and new Comprehend Syntax Identification features to improve fine-grained text analysis by labeling parts of speech.
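Using one of the newly supported languages comes down to passing the right language code to Translate's `translate_text` call. A minimal sketch, with the actual network call commented out since it needs AWS credentials (the codes shown, such as "ja" for Japanese, are the standard ones Translate uses):

```python
# Sketch: building a request for Amazon Translate's translate_text API with
# one of the newly added target languages. The codes follow Translate's
# conventions, e.g. "ja" (Japanese), "ru" (Russian), "it" (Italian),
# "tr" (Turkish), "cs" (Czech).

def translate_request(text, source="en", target="ja"):
    """Parameters for boto3's translate.translate_text()."""
    return {
        "Text": text,
        "SourceLanguageCode": source,
        "TargetLanguageCode": target,
    }

# In production:
#   boto3.client("translate").translate_text(**req)["TranslatedText"]
req = translate_request("Hello, world", target="ja")
```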

Also on Tuesday, AWS introduced new compute capabilities for its "Snowball" on-premises private cloud device and EC2 public cloud compute instances. (See AWS Boosts 'Snowball' Edge Device & EC2.)


— Mitch Wagner, Executive Editor, Light Reading
