Lightspeed Automation with Generative AI

Technically Speaking with Chris Wright
Transcript

00:00 — Chris Wright
The current hype for generative AI is palpable. AI has been part of our daily lives for years, from search results to shopping recommendations to discovering new movies or curating your favorite new playlist. But the question of how pervasive AI tools will be for the rest of our lives remains to be seen, and that pervasiveness will be limited by their reliability. After all, what good is cutting-edge AI if it can't perform its intended function accurately? So how do we make AI more reliable and where is the balance between that wow factor and usability?

00:39 — INTRO ANIMATION

00:48 — Chris Wright
The power of language is undeniable. It enables us to communicate complex thoughts, emotions, and ideas with one another. As such, there is immense value in continuing to develop AI models that can accurately understand and interpret human language.

01:07 — Chris Wright
Natural language processing is a subfield of artificial intelligence that builds on computational linguistics. Computer programs perform tasks like tokenization, part-of-speech tagging, and semantic and sentiment analysis to understand, interpret, and generate human-readable language. Humans can figure out what a sentence or phrase is trying to say, even if the words or grammar aren't exactly correct.

AI algorithms, on the other hand, have a hard time with this kind of ambiguity. They depend on statistical patterns and rules-based systems to try to understand language. This makes it difficult for AI to correctly interpret and generate human language. Machine language is a different story, with strict rules and syntax that make it easier for computers to process and understand. Ansible YAML is a highly structured, human-readable language used for configuration management and infrastructure as code, making it easier for an AI language model to generate Ansible playbooks.
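The structure that makes Ansible YAML tractable for a language model is easy to see in a small example. The following is a hand-written, illustrative sketch of the kind of playbook a tool like Ansible Lightspeed aims to generate from natural-language task names; the host group (`webservers`) and the `httpd` package and service names are assumptions for the example, not output from the actual model.

```yaml
---
# Illustrative playbook: descriptive task names double as natural-language
# prompts, and the strict YAML structure constrains what the model generates.
- name: Install and start a web server
  hosts: webservers
  become: true
  tasks:
    - name: Install the latest version of httpd
      ansible.builtin.package:
        name: httpd
        state: latest

    - name: Ensure httpd is running and enabled at boot
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```

Because every task is a small, named block with a fixed schema of module parameters, a model trained on trusted playbook content has far less room for the open-ended ambiguity of free-form prose.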

To dive into this topic, let's chat with an expert, Dr. Ruchir Puri, who's chief scientist of IBM Research and was previously the CTO of Watson and now leads innovation with AI and code across IBM. Hey Ruchir, how you doing?

02:30 — Dr. Ruchir Puri
Hey Chris. I'm doing well. Great to be with you here.

02:34 — Chris Wright
Yeah, thanks for chatting with me. I'm pretty interested, I mean the whole world's really excited about the potential of natural language processing, large language models. You've been working on this for quite some time and leveraging foundation models and generative AI in the project that we actually demonstrated in late 2022 together called Project Wisdom. And I know it's aimed at something a little different than maybe what most are familiar with when they think of generative AI and large language models. But I'm curious what's the motivation for you to start with Ansible as this content creation tool?

03:20 — Dr. Ruchir Puri
Our perspective comes from the domain we specialize in, which is the domain the world, and enterprises in particular, really cares about as well: information technology. And we focus on platforms. To scale things out, you need to have platforms that are scaled out as well. Obviously, if we look at the broader portfolio of platforms in the world, Kubernetes, which is incorporated as part of our Red Hat OpenShift platform, is one.

Red Hat Ansible is another, where there's a large Ansible community out there of almost half a million developers, and that's another platform that we work on. Our mainframe platform is another platform we work on, and so on. And the reason we started with Ansible in particular is the scale of it; it's used in almost every enterprise in one shape or form.

And this is how a lot of enterprises are managing, configuring, operating, and delivering their applications on their IT estate: the Ansible platform overall, with an enterprise-supported version in the Ansible Automation Platform. And for us, it was really about enterprises in some ways struggling to scale things because they are on a digital journey. If anything, the last several years have taught us that the digital journey is accelerating. And skills are not growing by leaps and bounds; people don't appear out of nowhere, and enterprises are struggling to find the right skills. I would really say the power of generative AI, which we incorporated as part of Project Wisdom and which is core to it, is about addressing that skills gap. Be a trusted partner of the developer who is trying to operate and deliver IT, and the applications that run on it, to be more efficient, to be more productive.

05:41 — Chris Wright
So when we take the work we started together in Project Wisdom and bring it forward into a commercial offering, Ansible Lightspeed powered by IBM's code assistant, your vision there is: let's help the world automate, and we can sort of automate the automation. One of the things I think is interesting in this context is that when you work directly with the community, you're working with Ansible Playbook writers who are deep domain experts.

And so there's something unique about taking that domain expertise, taking a general-purpose language model, and training it toward something specific. Even doing it with the community seems to really improve our trust in the overall system. What is your view there?

06:35 — Dr. Ruchir Puri
I would say, Chris, to me, the most exciting part has been working with the community, because the feedback you get is unfiltered. It's not filtered by any Kool-Aid; it is like a direct pipe, actually. You get exactly what people feel. And I love it because that helps us improve. Honest feedback is more valuable than rosy feedback, I would say. And I think it has helped us improve tremendously. And for us, the Watson Code Assistant was trained with trusted content from the Ansible domain, content that is governed, that we stand behind, and that the Red Hat team stands behind as well. And it brings the power of a certain language, in this case YAML within the context of Ansible, to the Watson Code Assistant. I've said it in many other talks as well.

I think of these language models as jack of all trades and master of none. It brings the mastery of the subject to that domain, which is what enterprises need to trust the capability and be more productive.

08:01 — Chris Wright
Definitely, a convincingly crafted piece of YAML that does absolutely nothing useful is a waste of everybody's time, which you do see sometimes with large language models, the general purpose ones. So having this specificity, accuracy, trusted content that you're using to train the model with seems really, really important. What about being able to recognize outputs from inputs, do you have a way to do any kind of attribution or connecting what you trained to what you're putting out?

08:40 — Dr. Ruchir Puri
This is such an important point, because it has been on a lot of people's minds. The people who have generated this content, who are the experts at it, whether they were visual artists or journalists or developers, that content is out there. So it becomes not just an ethical and a responsible thing to do, which should always come first, but an efficient thing to do as well.

Because imagine this scenario: it's never Ruchir alone writing the code. Never, actually; it's Ruchir, Chris, and a hundred other people working on a product. And to me, that capability is so differentiating for Ansible Lightspeed and Watson Code Assistant, and you will see more and more of it even now. That is one of our two differentiators, because first, we took it from a responsible and ethical point of view to earn the trust. But I also believe it is one of the most effective, and very practical, things for large enterprise software teams working together.

09:49 — Chris Wright
The way you're describing it, I'm really starting to picture a team as a collection of humans and some virtual team members: a code assistant, or a couple of code assistants. And when we combine the initial focus on Ansible with expanding to other languages and the ability to do discovery, you can really see how the future of development is going to be complemented by, powered by, and supported by AI and machine learning models. For sure, this has been a great conversation. Thank you so much.

10:26 — Dr. Ruchir Puri
Oh, thank you, Chris, for having me. I think we are on a wonderful journey together, and I'm looking forward to benefiting the community, and to the community benefiting enterprises as well. Thank you.

10:38 — Chris Wright
As our next wave of generative AI tools looks to simplify the path to information technology, those tools can't replace the critical thinking and creativity that humans bring to the table. The community's expertise and impact in developing and evolving the models, and their direct feedback and support, are essential for success. And as adoption grows, so will quality and broader usability. Ansible Automation is just the starting point, and what we learn here will shape the future of machine-augmented human intelligence.


Meet the guest

Ruchir Puri

Chief Scientist
IBM Research
