ProdSec Decoded

Candid conversations with the brightest minds in product security and AI.

5: Open Source and its Impact on ProdSec - Interview with David Nalley

In this video, we interview David Nalley (AWS Director of Developer Experience and Open Source Strategy and former President of the Apache Software Foundation) about 'Open Source and its Impact on Product Security in a world increasingly powered by AI'.

Creators and Guests

Chiradeep Vittal - Host
Pratik Roychowdhury - Host
David Nalley - Guest, Director of Developer Experience and Open Source Strategy, AWS

David Nalley is Director of Developer Experience and Open Source Strategy at AWS, bringing over two decades of experience in open source, infrastructure, and security. He previously served as President of the Apache Software Foundation, one of the world's largest open source foundations. At AWS, he leads initiatives to enhance developer productivity and strengthen the security of open source software supply chains. A recognized thought leader in the intersection of open source and security, David frequently speaks at major conferences worldwide about software supply chain security, open source governance, and the evolving landscape of cloud-native development.

Show Notes

In this video, we interview David Nalley (AWS Director of Developer Experience and Open Source Strategy and former President of the Apache Software Foundation) about 'Open Source and its Impact on Product Security in a world increasingly powered by AI'.

https://www.linkedin.com/in/davidnalley/

Discussion Topics:
1. Evolution of Open Source & David’s career journey from sysadmin to open source leader.
2. Dependency Risk & Governance Gaps: Incidents like Log4Shell underscore the critical failures in dependency governance — fast remediation is not enough when vulnerable versions are still widely downloaded.
3. Maintainer Blind Spots & Burnout: Open source maintainers often have no visibility into how their code is used — from cloud infra to medical devices.
4. Security in the AI Supply Chain: Developers frequently import random, poorly maintained libraries of unknown quality found on GitHub.
5. AI & the Developer Lifecycle: The rise of “vibe coding” (prompt-driven development) accelerates software creation but demands greater upfront clarity on goals and architecture.
6. Code Review is the New Bottleneck: With AI increasing volume, projects must scale up review mechanisms—not just code gen.
7. Open Source vs “Open” AI: The term “open source” is misused in the LLM world—many projects share weights but not data, training methods, or full model transparency.
8. Licensing Complexity: Existing licenses were built for source code, not for models, weights, or datasets. There’s growing consensus that new definitions and licenses (beyond Apache 2.0, GPL) are needed for open models.
9. Wrap up & Advice to Security & Engineering Leaders

Contacts:
Chiradeep Vittal : https://www.linkedin.com/in/chiradeepvittal/
Pratik Roychowdhury: https://www.linkedin.com/in/proychowdhury/

Introduction and Guest Overview
David Nalley's Journey into Open Source
Challenges in Open Source Security
AI's Impact on Software Development
Open Source in the AI Era
Concluding Thoughts and Advice

Episode Transcript

Interview with David Nalley

[00:00:00]

Chiradeep Vittal: Welcome to ProdSec Decoded, the podcast where we explore the evolving world of product security and AI. I'm Chiradeep Vittal.

Pratik Roychowdhury: And I am Pratik Roychowdhury. We are your hosts for the show today.

Chiradeep Vittal: Today's guest is a true visionary in the open source world and someone I've had the pleasure of working with in the past, David Nalley. Until recently, he served as the President of the Apache Software Foundation, where he represented the ASF at critical venues, including a White House virtual summit on open source software security, the Senate Homeland Security and Governmental Affairs Committee, and the United Nations' OSPOs for Good.

He currently leads open source strategy at AWS and also serves on the board of directors of the Internet Security Research Group.

Pratik Roychowdhury: In this episode, David shares his perspectives on open source and its security in the AI era: from dependencies, governance, and maintainer burnout to how AI-generated code [00:01:00] challenges longstanding assumptions about trust and code review. We also dive into the messy definition of open source in the LLM world and why new licensing models might be needed. David's insights are a must-listen for anyone building in today's fast-moving, AI-enabled open source software landscape. Let's dive in.

Chiradeep Vittal: David, welcome to the show. It's a pleasure to have you here with us today. David, we worked together at Citrix incubating the Apache CloudStack project maybe 10, 12 years ago. At that time our engineering team was pretty clueless about what it takes to build an open source community, and you guided us through very expertly and very patiently. In the end the process was quite smooth, and Apache CloudStack today is still a thriving community. You've had an incredible journey since then. You've been at the helm of open source strategy at AWS and the Apache Software Foundation. Can you share [00:02:00] what originally drew you into open source and how that journey has evolved with the rise of cloud and AI?

David Nalley: I can. I do want to call out that it feels like it's been about 15 years since we did that. I was actually thinking about that yesterday: cloud.com and then the CloudStack acquisition all happened in 2010 and 2011, so a little bit more than 10 years ago. But that means I'm just getting older.

So, you know, I got into technology as a sysadmin, and one of the things that I discovered about the particular geography I was living in was that I thought I would struggle to have access to really fun problems to work on. And I'd been introduced to a number [00:03:00] of open source tools, everything from operating systems like Linux and BSD to actual applications.

And then of course, small little tools that just made my day-to-day life a little easier as a sysadmin. Some of those represented interesting problems, either interesting in terms of the impact that they would have or interesting from an engineering perspective, because they required you to think about how to solve really fascinating technical problems in unique and novel ways.

And so for me, open source opened up a world of problems and puzzles that I got to work on that I didn't imagine I would have, based upon where I was geographically in the world. And so I started out with early contributions to sysadmin tools, or to tools that I found [00:04:00] interesting.

So one of the first that I worked on was a project called OpenGroupware, which, as you might imagine, was an open source groupware solution. And I worked on some other open source projects as well. But that quickly led me into a community called the Fedora Project. And I contributed to the Fedora Project for many years, doing everything from writing documentation to packaging software to serving on the Fedora Project board.

But open source originally was a way for me to go find interesting problems to work on, and also to solve some of the problems that I was facing as a sysadmin.

Chiradeep Vittal: And do you think your role or your perception has changed with the rise of cloud and AI?

David Nalley: I don't think so. I think there are still lots of interesting problems to solve. I think some of those problems have gotten more [00:05:00] complex. You know, when I started out as a sysadmin, one of the things I was looking at was how do I automate the provisioning of a server? And for the most part, that's a solved problem now.

The problems that we're trying to solve have moved to: how do I observe what's actually going on in an application that's running at scale on ephemeral instances across the world, or how do I troubleshoot or improve performance on this incredibly distributed application?

And so I don't think the problems have gone away. I think they've changed in nature a little bit. But at the end of the day, we're still focused on solving problems.

Chiradeep Vittal: Yeah. And speaking of that complexity, like you said, the nature of distributed systems and so on has just increased it. And AI brings a particularly new twist to the kinds of problems we are seeing. And I [00:06:00] think, as it turns out, these LLMs are still banking on open source projects, either to generate code or even to run their infrastructure.

So how do you see the current state of security governance in the open source components used by AI systems?

David Nalley: So, you know, I would say that most of the industry still struggles with good security governance, particularly of dependencies. Do you remember (you're a Java developer) Log4Shell from a couple of years ago? Yeah. So Log4Shell was a vulnerability that kind of rocked the world. One of the things that came out during Log4Shell (this, ironically, is the only thing I've ever said that has been noteworthy enough to garner me any kind of mainstream press [00:07:00] attention) was that there was an incredibly visible campaign to try and fix the problem, and the updates to the Log4j software were made and published to patch it within about 10 days.

And so, really fast remediation of the security problem in the software itself. But months after the fact, 30% of all downloads of Log4j from Maven Central were for a version vulnerable to Log4Shell. And so I think the industry at large has neglected a lot of the governance around curating what actually comes into their code bases as dependencies, and then keeping up with it so that they understand what the current state of that software is. Whether [00:08:00] that is that it has vulnerabilities that have been patched and need to be updated, or, in some cases, that the software's been abandoned and nobody's paying attention to it.

And I think both of those are places the industry has struggled with in terms of keeping track of it. You know, I've worked for a number of different software companies in my career, and at one of them I remember distinctly having a conversation with the CEO of that company. He was convinced that their software contained 5% or less open source, maybe up to 10%. But when we would go and look at some of the flagship products, they would have 850 open source dependencies, or [00:09:00] 1,200 open source dependencies. And that's a really, really difficult set of dependencies to have to manage, because you're talking about a lot more complexity than the software that you're actually writing in terms of managing that kind of attack surface.
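(A concrete illustration of the dependency-tracking gap David describes: the Python sketch below flags Log4j coordinates older than a patched version in a flat dependency inventory. The 2.17.1 threshold reflects the publicly documented Log4Shell fixes for the 2.x line; the "group:artifact:version" input format and the inventory contents are illustrative assumptions.)

```python
# Sketch: flag vulnerable Log4j coordinates in a flat dependency inventory.
# The 2.17.1 threshold reflects the published Log4Shell fixes for the 2.x
# line; the "group:artifact:version" input format is an assumption.

def parse_version(version):
    """Turn '2.14.1' into a comparable tuple like (2, 14, 1)."""
    return tuple(int(part) for part in version.split("."))

def find_vulnerable(dependencies, fixed="2.17.1"):
    """Return log4j-core coordinates older than the fixed version."""
    flagged = []
    for dep in dependencies:
        _group, artifact, version = dep.split(":")
        if artifact == "log4j-core" and parse_version(version) < parse_version(fixed):
            flagged.append(dep)
    return flagged

deps = [
    "org.apache.logging.log4j:log4j-core:2.14.1",   # vulnerable
    "org.apache.logging.log4j:log4j-core:2.17.1",   # patched
    "com.fasterxml.jackson.core:jackson-databind:2.13.0",
]
print(find_vulnerable(deps))  # -> ['org.apache.logging.log4j:log4j-core:2.14.1']
```

A real pipeline would pull the inventory from a build tool or SBOM rather than a hand-written list, but even this toy check captures the governance step David says is missing: knowing what you have, and whether it is current.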

Chiradeep Vittal: Yeah, absolutely. Do you think, on the supply side, the open source community that's developing the software, quite often it's to scratch their own itch, and suddenly it becomes very popular, and they may not have thought that, hey, it's going to be used in the world's most critical infrastructure. Do you think there's an opportunity or a chance to improve the tooling and the scrutiny that goes into open source, especially with the ability of AI to scan code and check for vulnerabilities?

David Nalley: Yeah, I think you're right. So first, the people who are building a lot of open source software have no visibility into how [00:10:00] it's consumed or how it's used. One of the shocking things that came out from Log4Shell was that the software was being used in lots of hardware devices that I had never even considered.

One of them was phlebotomy machines, and refrigerators. And so, I had never considered that a phlebotomy machine might need a Java logging framework. Maybe I should have had more foresight into that. But a lot of open source software is meant to solve a problem at a point in time, and, unlike commercial software, there's no contractual relationship.

You don't have to go ask anyone for permission, and many times the open source project doesn't even control the distribution point. People consume that software from a Linux distribution or [00:11:00] from a package repository like Maven Central or npm, and the actual authors of the software have no idea who's consuming it or why.

I also think that software developers in general are pretty inventive in repurposing things and using them in novel ways. And so finding a piece of software that'll save you some time and solve some problems is useful, and in many ways is a panacea for software development, because it allows us to create things so much faster.

So all of that to say, I completely agree with the problem statement: you've got people creating open source software who don't know how it's used, and that means it's really hard for them to completely understand the attack [00:12:00] surface that some of these libraries are being exposed to.

I think the first challenge really is curation, and I think the software industry as a whole has done a poor job of that. So if I were looking at solving the first set of problems, it would be: keep me out of trouble; don't let me choose dependencies that aren't well maintained, that don't have good hygiene.

I'm also amazed by the number of dependencies I see consumed by software developers that aren't really intended for widespread use. You know, again, it was somebody who was scratching an itch, and they released something on GitHub, and it solved somebody's problem, and then someone else noticed it, and adoption took off. So, you know, if I were [00:13:00] looking at how AI could help me make better choices from a dependency perspective, it would be to save me from the random surfing of GitHub to try and solve my problems with libraries of unknown quality.
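(David's point about curation can be sketched as simple hygiene heuristics over dependency metadata, the kind of check an AI-assisted or plain scripted curation gate could apply. Everything below is illustrative: the metadata fields, thresholds, and the example project are assumptions, not an established standard.)

```python
# Sketch: simple hygiene heuristics for curating a dependency. The metadata
# fields and thresholds here are illustrative assumptions, not a standard.
from datetime import date

def hygiene_flags(meta, today=date(2025, 1, 1)):
    """Return a list of hygiene concerns for one dependency's metadata."""
    flags = []
    if (today - meta["last_release"]).days > 365:
        flags.append("no release in over a year")
    if meta["maintainers"] < 2:
        flags.append("single maintainer (low bus factor)")
    if not meta["has_security_policy"]:
        flags.append("no published security policy")
    return flags

# A hypothetical scratch-an-itch library found on GitHub:
meta = {
    "last_release": date(2021, 6, 1),
    "maintainers": 1,
    "has_security_policy": False,
}
print(hygiene_flags(meta))  # all three concerns fire for this example
```

The value of a gate like this is not the specific thresholds but that the check runs before a library enters the code base, rather than after an incident.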

Chiradeep Vittal: Yeah, I certainly hope that comes about. And as someone leading developer experience (I know that's a recent job change for you), have you noticed any shifts in how developers build with AI tools, and does that impact security at all?

David Nalley: Well, I mean, I have lots of fear around vibe coding, because that actually feels a little more like YOLO mode. So I've got some concerns around that. I absolutely think the way that developers create software is rapidly changing. I don't know that we're at an end state now, but when I think about, [00:14:00] you know, you and I both worked on CloudStack, for instance; when I think about how CloudStack was created, developed, and evolved over time, I think that if you were looking at the macro level today, you would say AI's not really changing anything. It's speeding things up, but it's not changing it. But I think it's insidious, because I do think that there is an evolution happening in how developers create software. And I think one of the things that developers are going to have to get much better at is spending more time thinking about what they're actually trying to achieve and how they should go about solving the problem. Because AI speeds things up, and it makes it really easy to go try things and come out with something at least quasi-useful [00:15:00] pretty easily. Like, I'm watching some folks participate in some hackathons right now, and they're doing single-prompt game generation: they're creating, you know, text-based games, or things like Pong, with effectively one really well-written prompt.

And that's amazing, and I admire the fact that the technology has been able to evolve to the point where that is possible. Because when I started in my early adventures writing software, I was never very good at it. You know, I learned to program by copying samples out of a hardback book, in BASIC.

That’s how I learned programming was basically what you would call the hard way of, of copying and, and learning from my mistakes as I was effectively transcribing a hardback book into [00:16:00] into a computer. And, the, the, so the speed has definitely increased. I think that’s a big change, but I think that especially as we get past novel solutions that people are creating in a, a prompt or two, I think that it’s going to require folks to spend a lot more time upfront thinking about.

What they’re actually trying to accomplish and how to accomplish it, and to think about how the tools change their, their lifecycle. And I’m seeing a lot of a lot of these tools being quite innovative, especially as we get into an agentic timeframe where it’s not just chat as, as your primary interface.

You know, you're having agents with a specialized lens on the task they're handling, and so you may have multiple things going on in parallel. And I think that changes the way we have to conceive of and work with [00:17:00] tools when we're building software. Because it used to be almost a little transactional: we thought, okay, I'm going to go build this feature; let's think about how this feature should look. And, you know, you might spend weeks working on a feature. And now you can have a number of agents doing things like writing documentation as the software's being written, and writing tests, all working in parallel.

And I think that's something that we're not quite ready to deal with, because that effectively turns the developer into not just the developer, but also the manager of all of the other teams, making them more of a quarterback. I'm curious what your view is; you've got a different lens than me. What's your view on [00:18:00] how AI is changing the software development lifecycle?

Chiradeep Vittal: Yeah, I've certainly used a single prompt to build a feature myself. In fact, recently I was trying to use an open source component written in TypeScript, and I'm not very good at TypeScript at all. I wanted a new feature, so I submitted a pull request, and in full disclosure, it was largely written by AI. And I wrote the test and everything else.

And it is remarkable, the pushback I got, even though I didn't say it was AI. The AI did not understand the original developer's intent, like how to structure and lay out the code. And so we got quite a bit of back-and-forth pushback. I thought that was good, because you don't want people submitting YOLO code into open source either.

But what do you think of that? Let's say somebody came up to Apache CloudStack and said, here's 10,000 lines of YOLO [00:19:00] code, it's a new hypervisor I'm supporting. Should they take it, or what should their reaction be?

David Nalley: I think, and I've had this conversation with a number of open source projects, the constraint isn't now, and frankly I don't think it ever has been, the writing of code. The actual constraint on development in open source has always been people's ability to review it for quality and security.

So you might be able to generate a 10,000- or a hundred-thousand-line patch set, but the real constraint is: can somebody review that? And you see a lot of open source projects today who want really small, almost atomic commits to come in so that things can be reviewed, and I think that's a good practice.

I think there's a lot of code being written by AI, [00:20:00] of varying quality. And that's not a reflection on the tools or the open source projects; it's really a reflection of the people who are wielding the tools and whether they're spending time to vet it.

But I think we're going to have to build better reviewing tools. Code generation is not the end state that we should be searching for. I think it's going to be the analysis and vetting of code that's coming in, from a security lens or from a software quality lens. And I think that's missing.

If I were talking to a project today, I'd say: hey, people are going to be using these tools. You're not going to stop people from using the tools, because it does speed them up. The real question is, how are you going to deal with a huge influx of contributions?

Because if [00:21:00] people see a problem, and now they have a tool that will speed up code generation, you're going to be getting a lot of it. How do you keep up with it? What's your strategy for that? And I don't think open source projects should lower the quality bar. If anything, we should be leveraging tools to increase the quality bar.

But it's very much a concern, and I'm worried about increasing maintainer burnout in these open source projects. Because, you know, when I'm playing with some of these tools, I'm putting together applications that would've taken me weeks in the past.

I'm able to do that in an afternoon. And, you know, that is speeding up one portion, but that means that the constraint remains reviewing that code. And I think that's a real challenge for open source projects: I see lots of people celebrating that they can create code.

I'm not seeing lots of tools that celebrate that they can review code really well.
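(The review constraint David describes is one place where projects can automate today. As a hedged sketch, a CI step could enforce the "small, reviewable changes" practice he mentions by failing pull requests whose diffs exceed a review budget; the 400- and 200-line thresholds below are arbitrary illustrations, not a community standard, and the file names are hypothetical.)

```python
# Sketch: a CI gate that enforces small, reviewable change sets. The
# 400-line total and 200-line per-file budgets are arbitrary illustrations.

def review_budget_check(changed_files, max_total=400, max_per_file=200):
    """changed_files maps path -> lines changed. Returns (ok, reasons)."""
    reasons = []
    total = sum(changed_files.values())
    if total > max_total:
        reasons.append(f"{total} total lines changed exceeds budget of {max_total}")
    for path, lines in changed_files.items():
        if lines > max_per_file:
            reasons.append(f"{path}: {lines} lines exceeds per-file budget of {max_per_file}")
    return (not reasons, reasons)

# A hypothetical 10,000-line "new hypervisor" drop would fail the gate:
ok, reasons = review_budget_check({"hypervisor/driver.py": 9500, "docs/driver.md": 500})
print(ok)  # -> False
```

A gate like this doesn't review code; it only keeps each change small enough that a human maintainer still can, which is the bottleneck David identifies.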

Chiradeep Vittal: That absolutely echoes my feeling as well.

Are open source organizations like Apache and the Linux Foundation looking into providing or investing in these kinds of review tools to help the community?

David Nalley: I think there's definitely some interest. You know, when I was running the Apache Software Foundation, when I was the president there, we were definitely paying attention to what continuous integration and continuous delivery look like in this new environment.

And there's a little bit of concern that for open source projects, that effectively becomes an unfunded [00:23:00] mandate, because now, to put it euphemistically, the machines are creating lots of code to be reviewed. And so now we need to spin up lots of testing and evaluation, which is equally expensive.

If not more so. And so we essentially have a situation where the machines are churning out tons of code, and there is suddenly the need to have machines reviewing lots of the code. And how do you manage that expense? Because most open source projects are not well funded in the commercial sense.

They may be at a well-respected foundation, whether that's Eclipse or the Linux Foundation or Apache, but that doesn't mean there's the same kind of funding that you would expect of a commercial project. And, [00:24:00] you know, it's a little scary, even pre-AI, just how much that continuous testing, the continuously running tests every time a pull request came in, would spin costs up. So even if you're just talking about maintaining the status quo of testing, your testing costs have skyrocketed, because now people are creating more change sets, which means more testing has to run. I'm not aware of anybody right now that has actual production AI review. I see lots of people paying attention and looking at where it's headed, though. But I think we're still in the early days of that.

Pratik Roychowdhury: David, maybe keeping on the theme of AI, but switching gears slightly into LLMs and open source: the definition of open source, as well as some of the licensing questions. The phrase "open source" in the LLM world is thrown [00:25:00] around a lot. Some people just open up their APIs and call themselves open source. Sometimes they just publish open weights and call themselves open source. Training data, nobody really opens up; model architectures, maybe some of them do. So I'd like to get your thoughts on this concept of open weights versus open source, and to hear your perspective on whether, in the LLM world, the definition of open source needs to change, or whether things are fine as they are and people should call themselves open weights as opposed to open source.

David Nalley: I think there's a lot of hype around the idea of open source AI. AI systems, which I'm going to use as a really large generalization for LLMs and things like LLMs, are a lot more complex than [00:26:00] software. Software is relatively simple.

There's source code, which ends up being compiled into object code, and those are really your two modalities for software. You have source code and you have binary executables, and so it's relatively simple to think about licensing from that.

And I also think that it's important to think back: when we created open source licensing (I say we; when the open source community created open source licensing, because that does predate me a little bit), it was to work around constraints introduced by copyright.

And the [00:27:00] focus was on letting people help each other. As a matter of fact, if you look at the four freedoms, they explicitly call out the purpose as being able to share and help your neighbor. And so I think when you look at open source from that perspective, that's pretty telling.

But I also think that there's this idea of being able to study and learn from what's been built. And you'll see that in things like the four freedoms: access to source code is a prerequisite for some of the freedoms. And so I think that is a useful first framing.

A lot of what I hear described as open source actually doesn't really resemble open source software to me. There are restrictions on field of use, or restrictions [00:28:00] on classes of people or entities that can make use of it. And that doesn't feel very open source to me. And I think we're in a situation where open source has been really successful as a development methodology, and effectively open source software has become a public good, a public good that, you know, we take for granted. And so in the rush to make these AI systems available to people, I think that folks want to share them broadly. They want to help other people, including their neighbors, and let them take advantage of it.

And so I think there's some confusion there. But also, AI systems, like I said a moment ago, are a lot more complex, right? In the case of an LLM, you start with data and transformers, and you end up on the other [00:29:00] side with weights and models. It's not quite directly analogous to the simplicity of: I've got source code, which I will use a compiler to build into object code.

Even that's pretty deterministic. And moving from deterministic to probabilistic means that, you know, you might not have the same outcome when you run the same things through a system that builds models. And so I think it's complex. I know that there's been a lot of additional complexity because of

figuring out what elements you need to qualify something as open source. And so there's certainly been work on definitions of open source [00:30:00] AI out of the OSI, basically the stewards of the open source definition. The Open Source Initiative has maintained the open source software definition for decades now, and they've created an initial version of the open source AI definition.

The Linux Foundation has created a similar one. And I actually liked what Steve at RedMonk said, which is that there probably can't be an open source LLM or an open source AI, because what we're used to, especially if you're deeply involved in the semantics of open source, you can't easily replicate with these probabilistic systems. And so I think there's actually some gap there. That doesn't mean that the term open source won't be overloaded and won't be used. But I'm not sure [00:31:00] that it's directly applicable. And I think part of that is going to come down to just how much you can share about how they were built.

How much can you learn about how they were built? How much can you then go in and change and evolve things? I don't think that we will end up with LLMs being the final stage of development, with the final output in terms of models and weights being available and having freely redistributable APIs or even freely redistributable models.

I think the topic is so complex that there's probably a new way of thinking that we're going to have to get to around models themselves.

Pratik Roychowdhury: And you did touch upon the licensing concept, and maybe double-clicking on that a little bit more.

David Nalley: Yeah. Yep.

Pratik Roychowdhury: Apache 2.0 GPL and other licenses, so maybe want to get your perspective on how should. Open [00:32:00] source foundations. Think about licensing. Is there a new licensing paradigm that should come about for these AI systems, or will existing licensing can be used for these?

David Nalley: It's a good question. I think, frankly, most open source software licenses were already straining, because they were explicitly written for source code. And so you have things like documentation, and I think most open source software licenses did a poor job of acting as a license even for documentation.

Now, I'm not an attorney, and I don't even get to play one on the internet, but, you know, when your software license has things like an explicit patent grant, how does that work for documentation? So yes, I [00:33:00] think you'll end up seeing new and interesting licenses. As a matter of fact, there have been a couple announced recently that are explicitly designed to be licenses for the model.

I actually think the Linux Foundation just released one in the past couple of months. So I do think that we will start seeing some new and interesting licenses for models that are going to be model-specific. And in part that's because the software licenses were written really for source code, and don't do a great job when you try and change modalities.

Chiradeep Vittal: We discussed the increasing load on maintainers from AI-generated code, and of course, as security practitioners, we worry about the security of this contributed code [00:34:00] as well. So from your perspective, what changes would you like to see in the open source ecosystem to improve the reliability, the security, and the availability of open source?

David Nalley: First, I want to call out that I think it changes the dynamic of open source, and probably other places as well. You talked about this with your TypeScript contribution, right, where your LLM did not understand the original author’s intent. I think that a lot of open source, today and historically, has been built around individuals working towards common goals and understanding that almost implicitly. That’s one of the things I worry about changing: [00:35:00] instead of having a shared common understanding in a community, you have tools that are effectively making assumptions based upon the inputs to them.

When an open source project has a thriving community, there’s a high degree of trust between participants, especially among the maintainers. I’m not sure that translates well into AI-generated things. One challenge we’re going to have to figure out is how we convey purpose, intent, and direction so that everyone, using lots of different tools, is working in the same direction. Maybe that’s part of the change in the software development lifecycle: [00:36:00] we have to be more explicit about that, rather than assuming that we and all of our tools understand that direction.

So that’s probably one of the things that needs to change: we need to recognize that the things we could get away with leaving implicit, because we’d have conversations about them on IRC or on a mailing list or on Slack, are going to have to be made very explicit, so that the tools start understanding what’s expected and what the direction and intent is.

I also think we’re going to have to figure out how to improve review, how to scale up reviews. And then, is review something that we trust, or is it just another step in the continuous integration testing platform?

Do we still have to do lots [00:37:00] of manual verification? I think that’s something that’s going to be decided on a project-by-project basis, and it probably changes over time as tools improve. If you told me three years ago that I would have a tool that would write an application, and I would only briefly peruse what it created, I would have found that unfathomable. And I think maybe in three years’ time we might see similar shifts in review.

Pratik Roychowdhury: David, as we wrap up, I just want to get any advice or parting thoughts you have for leaders who are trying to build in this complex world of vibe coding, AI, open source, and everything else.

David Nalley: Yeah, my sense is that the fundamentals haven’t changed; they’ve only sped up. So a lot of this is still a [00:38:00] focus on curation of what you’re taking as a dependency, and making sure that you have systems in place to help you understand what’s going on in all of those dependencies.

You took those dependencies because they sped you up, but that also means you have to have systems in place to monitor what’s going on with your dependency set. And then I would say you cannot ignore AI. I think there are a lot of folks who are almost avoiding the conversation because it’s different than what they’re used to seeing.
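David’s point about having systems in place to monitor your dependency set can be sketched as a simple version check against an advisory feed. This is a minimal illustration, not a real tool: the `ADVISORIES` data, the `name==version` lockfile format, and the fixed versions shown are assumptions made for the sketch (the log4j-core entry echoes the Log4Shell-era remediation mentioned in the show notes).

```python
# Minimal sketch of a dependency-monitoring check: flag pinned
# dependencies whose version is still below a known-fixed version.
# The advisory data below is illustrative, not a real feed.

def parse_version(v):
    """Convert '2.14.1' -> (2, 14, 1) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisory set: package name -> first fixed version.
ADVISORIES = {
    "log4j-core": "2.17.1",
    "example-lib": "1.4.0",
}

def audit(lockfile_lines):
    """Return (name, pinned, fixed) for dependencies below a fix."""
    findings = []
    for line in lockfile_lines:
        name, _, version = line.strip().partition("==")
        fixed = ADVISORIES.get(name)
        if fixed and parse_version(version) < parse_version(fixed):
            findings.append((name, version, fixed))
    return findings

deps = ["log4j-core==2.14.1", "requests==2.31.0"]
print(audit(deps))  # flags the vulnerable log4j-core pin
```

In practice this role is filled by a real vulnerability feed and scanner rather than a hand-maintained dictionary, but the shape of the check — compare what you have pinned against what is known to be fixed, continuously — is the system David is describing.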

And I think that’s a bad take. The question is: how are you going to use these tools to increase your quality and speed things up? Because the change is coming, and it really reminds me of some of the changes we’ve seen before, though it’s moving faster. [00:39:00] The move from functional to object-oriented languages, the move from text editors to actual IDEs. The adoption of IDEs reminds me of this in a lot of ways, because an IDE would do a lot of boilerplate for you, and people said it wasn’t a real programming tool, it was a toy. There was a lot of resistance, and you definitely still had to remain in camp vi or camp emacs when the early IDEs were coming out.

But now I’m not sure I can imagine not having an IDE available when writing code. And I think we’re on an even faster timeline than IDEs in terms of transforming how developers work. I don’t think those things are settled yet; we’re still in the very early days of just how different that’s going to [00:40:00] look. And then finally I would say, you’ve got to be spending time with these tools, looking at everything that’s coming. The space is moving so fast that it’s easy to get left behind, and missing six months in this industry is crazy, because the world is changing so fast around us.

Chiradeep Vittal: David, thanks for all those insights. All right, folks, there you have it: another fantastic episode on open source security and developer experience in the world of AI. Thanks again to David for sharing his insights. It’s been an absolute pleasure speaking with you.

David Nalley: I, I really appreciate you reaching out and inviting me. It’s been a lot of fun and I, I appreciate the chance to get to catch up with you again. We should not wait as long next time for us to, to chat.

Chiradeep Vittal: Absolutely.

Pratik Roychowdhury: Thank you, David.

Chiradeep Vittal: Thank you. And thank you to all our listeners for joining us on ProdSec Decoded. If you found value in today’s conversation, please subscribe, leave a review, and share it with your colleagues. We’ll be back soon with another deep dive into the [00:41:00] evolving world of product security and AI. Until next time, stay secure.