Our Team




CEO, Founder

Jeff Adams has been leading prominent speech & language technology research for more than 20 years. Until 2009, he worked at Nuance/Dragon, where he was responsible for developing and improving speech recognition for Nuance's "Dragon" dictation software. He presided over many of the critical improvements of the 1990s and 2000s that brought this technology into the mainstream and enabled widespread consumer adoption.

After leaving Nuance, Jeff joined Yap, a company specializing in voicemail-to-text transcription. He assembled a strong team of 12 speech scientists who, within just two years, beat all competitors on an unbiased test set and matched the performance of a competitor that relied on offshore human transcription.

Yap's success caught the interest of Amazon, which wanted to jump-start its new speech & language research lab. After the acquisition, Jeff led the effort to build one of the industry's leading speech & language groups; his Amazon team developed products such as the Echo, Dash, and Fire TV. Jeff left Amazon in 2014 to found Cobalt Speech and Language.



Julie has more than 20 years of experience in the software industry, building enterprise platforms and applications, leading teams, and mentoring talented engineers. She began her career at BroadVision, where she helped define and create the first implementation of what would become the Java portlet specification.

Before joining the Cobalt leadership team, she was an Engineering Manager at Quick Base developing a platform that enables non-technical users to build sophisticated custom business applications. Julie loves finding solutions, making the complex seem simple, and creating software that empowers people to accomplish more than they ever thought possible.


VP, Government Technology

Bill has over 25 years of experience applying leading-edge speech technology to real-world problems. At Cobalt, he leads the effort to apply Cobalt's core technology to government applications.

Prior to joining Cobalt, Bill held leadership positions at several speech technology companies: Entropic, which was acquired by Microsoft; SpeechInsight, a speech technology consultancy for Fortune 100 companies such as Verizon, Comcast, and Microsoft; Think-A-Move, a speech technology company developing hand-held medical devices for Army combat medics and first responders; and Tradeharbor, a company providing SaaS-based voice authentication services for the financial services market.

He holds a BS in Electrical Engineering from the University of Virginia and an MS in Computer Science and Electrical Engineering from The George Washington University.


Chief Scientist

Stan is the Chief Scientist at Cobalt and leads the technical approach on Cobalt projects. Prior to Cobalt, Stan was a key speech scientist at Amazon, where he spent four years contributing directly to acoustic modeling for the Echo as well as to other Amazon speech projects such as Fire TV, Dash, and Shopping. Before Amazon, he spent three years at Yap as a key contributor to Yap's voicemail recognition platform. Stan has also worked on machine learning applications at NASA, General Dynamics, and NuTech Solutions, and earned an M.S. in Computer Science from the Florida Institute of Technology.


General Counsel

Scott Earnshaw is a start-up attorney with substantial experience in new technologies, as well as significant in-house legal experience.

His practice has focused on patent licensing and on internet and emerging energy technologies, with work spanning sports leagues, financial services, and mergers and acquisitions.

Our Latest Posts

Jun 15, 2022
By Rasmus Dall

We’ve previously written about one of our core technologies at Cobalt Speech & Language: automatic speech recognition (ASR). When you speak, an ASR system converts your spoken words into text. Another core technology at Cobalt is text-to-speech (TTS), or speech synthesis, which converts written words into spoken audio.

Jun 3, 2020
By Arif Haque


Many people have used automatic speech recognition systems to transcribe audio to text, but there is much more useful information that can be identified in a stream of audio. One such task is diarization: determining who spoke when. Knowing this can help with a range of downstream applications; in meeting summarization, for example, knowing who said what lets you attribute notes and allocate action items accurately.