Hi Dan, we wanted to have this session basically to talk a little bit about Keysight, your journey into Keysight and get a little bit of information on that.
So maybe let’s start by introducing yourself to those who don’t know you as I do after all these years. Talk a little bit about your journey and what led you to join Keysight.
Sure. Well, thanks for having me, Dave. I appreciate it. You know, probably the relevant part of that journey is my first foray into the test automation market space. This goes back to roughly 2006, when I joined a company called Numerics, where we built test automation solutions for packaged applications, very specifically PeopleSoft, SAP, Oracle, and a couple of change management technologies.
I got immersed, as a CEO of that company, in this market space and gained, I think, a lot of insight in terms of what tools were available, what customers’ requirements were and how you marry those two items together.
My relationship with you, Dave, started when we were both in the sales organization at WorkSoft, which was a very SAP-focused entity. We managed to collaborate on driving that organization in a direction that made it, at the time, what I believe was, in my humble opinion, the world’s leading SAP test automation solution.
And so, through various stints as either a global CIO or CEO in this market space, I ended up evaluating and assessing the current state of the industry, trying to understand what companies really required to keep pace with the changes in software development, the necessary changes from a software quality assurance perspective that came out of that, and who was not only taking advantage of new technologies and building them into their platform but also creating new methodologies for testing.
And so, in the course of that evaluation, I looked at both nascent new entrants into the market and some of the more established players. What I found was that Eggplant software, which had recently been acquired by Keysight, had, from my perspective, the best vision in terms of how you bring all that together, coupled with the acquisition by Keysight, which gave Eggplant access to substantial financial resources to actually make that vision a reality.
I made the decision to join to help shape that vision and propel the company to be the dominant player in the software quality assurance market space.
SQALogic is thrilled to be part of that ecosystem right now, joining the partner program and representing that technology for our customers. I agree, Dan. It’s interesting to see the evolution that has happened in the last, I’d say, two years in the industry. I’ve been in it for 25 years, and there’s been a sort of status quo in the way testing was approached: traditional record and replay, or create a framework, code and script it, get it done, and maintain it.
I think there are a lot of exciting new things coming to the market. I think Keysight is a company that has a tremendous vision in that area and has led the charge, if you will, around certain things that I want us to talk a little bit more about.
You know, the notions of, for example, artificial intelligence and this evolving capacity for auto-learn and auto-repair, things of that nature, which I find very compelling and exciting for business users looking to save time and money and achieve goals. So why don’t you talk to us a little bit about Keysight Eggplant’s philosophy and vision around that.
Yeah. First, I would maybe like to touch on something you mentioned, which is our partnership. I think this is critically important.
Technology is obviously fantastic, and leading-edge technology that delivers extraordinary value is what vendors like Keysight or Eggplant are all about. But the fact is that the implementation and operationalization of that technology is what delivers value. I’m a very firm believer that the marriage of this leading technology with leading providers like SQALogic, who really understand the market, understand customer challenges, and understand how to implement and make this technology an intrinsic part of an operation, is where the magic happens: the confluence of those two partnerships. We’ll talk about technologies, but without the means to intelligently deliver that technology in the way that SQALogic does, it often becomes shelfware. I think that’s a really important point.
So back to the technology. If you think about modern development, most new development is based on web technology, and it introduces a number of different challenges from a testing perspective. Over the course of the last 5 to 7 years, we’ve moved away from just traditional application lifecycle management, where you’re testing production applications and the changes that happen there, to encompass DevOps or CI/CD testing in the actual development process. That presents a new set of challenges, particularly as the underlying technologies for development and engineering evolve.
The use of artificial intelligence and machine learning, those capabilities incorporated into the test automation process, is a huge opportunity to gain deep insight into user journeys and application behaviors, to map existing test cases to those user journeys, and to determine where there are gaps and where there’s overlap, to help guide where you fill in those gaps. We’re starting to see autonomous test generation, where AI can evaluate an application, determine where you do and don’t have coverage, and automatically generate the missing test cases.
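The mapping Dan describes, matching existing test cases against observed user journeys to find gaps and overlap, reduces in its simplest form to set operations. This is an illustrative sketch only, not Eggplant’s actual implementation; the journey steps and test case IDs are hypothetical.

```python
# Illustrative sketch: compare observed user journeys against the journeys
# exercised by existing test cases to report coverage gaps and overlap.
from collections import Counter

# Each journey is a tuple of screens/steps observed in production analytics.
observed_journeys = {
    ("login", "search", "add_to_cart", "checkout"),
    ("login", "search", "product_page"),
    ("login", "account", "update_profile"),
}

# Each test case records the journey it exercises (IDs are hypothetical).
test_cases = {
    "TC-001": ("login", "search", "add_to_cart", "checkout"),
    "TC-002": ("login", "search", "add_to_cart", "checkout"),  # duplicate coverage
    "TC-003": ("login", "search", "product_page"),
}

covered = set(test_cases.values())
gaps = observed_journeys - covered               # journeys with no test at all
counts = Counter(test_cases.values())
overlap = {j for j, n in counts.items() if n > 1}  # journeys tested repeatedly

print("gaps:", gaps)
print("overlap:", overlap)
```

In a real platform the journeys would come from production monitoring rather than a hand-written set, and generation of the missing tests would follow from the `gaps` output.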
You can also use it for one of the classic problems in testing, which is maintenance of those test cases: understanding when a selector has changed, or an XPath has changed, and automatically updating your test automation scripts or assets. So the use of AI is really changing the face of testing and providing a lot of new capabilities. Similarly, I think RPA is maybe a little more in its infancy, but if you talk to analysts, they often say RPA is the fastest growing segment of the enterprise software market. You see revenues increasing year over year by something like 63%, and it’s technology that can be used to automate workflows.
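The self-repair scenario Dan mentions, where a broken selector is replaced automatically, comes down to a fallback-and-record loop. A minimal sketch, with a plain dictionary standing in for the rendered page so it runs standalone; the selector strings and element IDs are hypothetical:

```python
# Minimal self-healing locator sketch. A real tool would drive a browser;
# here a dict stands in for the rendered page so the logic is self-contained.

page = {  # locator -> element id currently present in the "DOM"
    "data-testid=submit-btn": None,        # old selector: element was renamed
    "text=Submit order": "btn-42",         # fallback: visible label still matches
    "xpath=//form/button[1]": "btn-42",    # fallback: structural position
}

def find_with_healing(page, locators):
    """Try locators in priority order; return (element, locator_that_worked)."""
    for loc in locators:
        element = page.get(loc)
        if element is not None:
            return element, loc
    raise LookupError("no locator matched; manual repair needed")

locators = ["data-testid=submit-btn", "text=Submit order", "xpath=//form/button[1]"]
element, healed = find_with_healing(page, locators)
if healed != locators[0]:
    # A self-healing tool would persist this choice, updating the script asset.
    print(f"healed: promoted {healed!r} over broken {locators[0]!r}")
```

The essential idea is that the tool keeps multiple ways of identifying the same element and records which one succeeded, so the script asset can be rewritten without human intervention.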
What do we mean by that? In testing there are a lot of repetitive activities that don’t require high cognitive effort, right? They’re not well served by human beings; these low-cognitive-effort tasks are ideally suited for RPA, where you perform those repetitive tasks using software instead of people and remove the expense and the error-prone nature of manual work. We’re already starting to see it replace regression testing, performance testing, and some load testing, enabling QA professionals to channel their efforts into more mentally stimulating activities like exploratory testing, usability, and some ad hoc testing. I think the combination of AI and RPA will dramatically change the nature of testing for modern applications.
Absolutely, and it’s interesting, because the arrival of these technologies that are more robust and capable of adapting to change is what drives RPA success. If automation is, as we would say, flaky (flaky tests, things that just break and are fragile and difficult to maintain), then there’s simply no way to consider using it for production-level activities, given those instabilities. AI and RPA are technologies I find very compelling as well, and I hope we have the opportunity to dive deeper into them in another session.
But one of the things that Eggplant does phenomenally well is the ability to test anything by interacting with an abstraction layer in the middle of the application, instead of looking at all the little nitty-gritty objects that are part of the application. When you tie into the object layer of the application, you’re dependent on support for all the versions and specifics that can change with a patch and break everything.
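The idea of interacting with what the tool sees, rather than with the object layer, can be illustrated with a tiny template match over a simulated screen. This is a toy sketch of the general image-matching concept, not Eggplant’s engine; the pixel grids are made up for the example.

```python
# Toy sketch of image-based interaction: find a small template inside a
# "screenshot" (a 2D grid of pixel values) and return where to act. Real
# tools match against actual screen captures, typically with fuzzy tolerance.

screen = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 1, 2, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
template = [  # the button's visual pattern
    [1, 1],
    [1, 2],
]

def locate(screen, template):
    """Return (row, col) of the template's top-left corner, or None."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return r, c
    return None

pos = locate(screen, template)
print("click at:", pos)
```

Because the match is against the rendered output rather than internal objects, a change to a selector or a framework version doesn’t break the test as long as the screen still looks the same to the user.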
Eggplant’s ability to interact with what it sees, the same way a tester does, is extremely interesting, and I think that’s a whole other topic we could go down the path of; we’ll probably do that in another session.
I find that AI is driving maturity into traditional testing. We’re getting more eyes on quality. Quality is now owned by a broader audience; it’s no longer just the test team that must validate things before they’re kicked into production. Production data is being leveraged to feed back into testing, in terms of selecting what needs to be tested and what doesn’t. Developers are much more the owners of the quality of the code they put out, and that whole cycle is very interesting to see.
And then, like you said, that drives toward robotic process automation kicking off, which has always been a need, right? There are a lot of redundant, tedious tasks in IT that are not difficult to do, but the risk of error is there, so people were forced to do them. In fact, testing is one of those funny industries where manual testing, for example, or just “read the spreadsheet and repeat the tasks,” becomes tiresome.
Being able to automate those elements removes that burden and lets your test experts do what you really want from them: apply their mindset for finding bugs, as seekers and identifiers of problems. Remove those tedious tasks and you allow them to allocate more bandwidth and expertise to the meaningful testing that needs to be done on the applications, which I believe ultimately greatly improves quality.
The other thing is to think from the business perspective. Software rules the world; every company now is a software company. And often the competitive advantage is how quickly you can develop and release software of high quality. Speed and quality are essential there. So the demands from a business perspective are quite different than they were before. Yet some of the same challenges we faced 20 years ago together, Dave, still remain.
The gap between technical QA users and the business users who have to validate applications and understand the underlying processes still exists, and these technologies help continue to bridge it, enabling nontechnical business users to participate in the process in a meaningful way. That ultimately means higher quality at a faster pace than with traditional resources.
You’re absolutely right; I couldn’t have said it better. You know, we’ve seen too many times where QA is perceived as a necessary evil, just something that needs to be done, with a cost attached, that people push through. I think that trend is changing; people are seeing the value that properly done quality assurance and testing can deliver to the application and to the end-user experience.
And part of that comes with measurability and accountability in the tests as well. You can’t just do testing for testing’s sake. It’s about testing the right things, having the ability to determine the appropriate paths through the application, and achieving better coverage.
Well, our time is coming to an end. I don’t want this to go on forever, although we could easily continue the discussion for hours. The goal here was a quick introduction to set the stage. I definitely feel there are a couple of very interesting topics we can dive into a little bit more. We’ll have to talk about that, you and I, but if you’re open to the idea, I would welcome the opportunity for a couple of follow-up sessions where we might bring in some of the technical experts and dive into some of that. I think there would be a lot of interest in something like that.
Yeah, I’d love to. I think there are a number of these different areas where we can do a much deeper dive and provide a lot of information on what we’re doing, why we’re doing it, how it’s done, and ultimately what the value is to the organizations that consume these kinds of software products and services.