Lessons and Takeaways from O’Reilly “Coding With AI” Event

Last week I attended the O’Reilly “Coding With AI – The End of Software Development As We Know It” online event, and I have to admit it was a much more balanced event, in terms of ideas and opinions, than I had worried it would be.

I was worried that it would just end up being more investor- and AI-marketer-fueled hype: over-the-top claims of “productivity” and the “death of software engineering”, and chest-beating chants of “you’re a Luddite!” (though there are fundamental differences between being opposed to automation and being critical and careful about it, but that’s a matter for another post). There was none of that per se (with a stellar cast of speakers like Kent Beck, Birgitta Böckeler and Chelsea Troy, you are in good company), so in this quick post I will share my bullet-point takeaways, ideas and lessons from the event.

General Summary and Takeaways

  • AI is good at handling the accidental complexity in code and software engineering; humans should still handle the essential complexity. AI tools are good at generating happy paths, scaffolding, boilerplate etc., but human engineers have to pay attention to edge cases, design, compatibility, security, quality and problem-fit.
  • AI tools might seem to help experienced and senior engineers more than juniors, because experienced engineers have a better-developed sense of design and edge cases and can guide AI more methodically, whereas juniors can be more prone to blindly accepting AI-generated code.
  • AI tools seem to work better on greenfield software engineering work than on brownfield work (for example, refactoring a large legacy system or fixing bugs). Good prompts can make a big difference up to a point, but complex codebases remain hard for an LLM (or agentic tool) to handle without hallucinating: an agentic tool might confidently claim, for the fifth time, that it has fixed the bug you asked about, when there is a good chance it hasn’t.
  • A curse, and also a boon, of these tools is that now that it’s easier to write new code, a lot more code might get written. Humans must keep control of the design process and review AI-generated code thoroughly to make sure it fits the organisation’s overall context: security, problem domain and usage.
  • Techniques like “vibe coding” (or vibe anything) are best used for quick prototypes and POCs that will either eventually be thrown away or be refactored to a proper design before going to production.
  • We need to be careful and skeptical of claims like “At <company>, x% of all code is now written by AI”. Chances are that code is mundane, predictable, safe and tightly scoped: think auto-completing a for loop, adding a null-reference check or implementing a login frontend. Also, some of these companies are in the business of selling AI tools. Written != in production.
  • Learning foundational computer science and software engineering skills is still important, and will be for the foreseeable future, if we are to deal with the increasing complexity. Don’t expect AI to do the critical thinking for you.
  • Getting good at reviewing code critically, and learning how to build context so you can use AI tools effectively, matters. You need to guide AI to the outcome you want, so build that skill.
  • Tech leaders should think carefully about whether AI is actually speeding things up and making them better. Think about where it fits and where it doesn’t.
  • Be mindful of token consumption costs when using a hosted LLM (a rough cost-estimation sketch follows this list).
  • Software engineering is not just about writing code; it’s also about thinking about the product, users, deployment, security, monitoring, support etc. All of that is still important and must be given due attention, even in the age of AI.
  • If we blindly accept whatever code AI gives us, thinking we will use AI to fix it later, AI itself will struggle to maintain the bad code it generated, because complex or complicated codebases are exactly where these tools fall over.
  • AI tools might provide better results in some programming languages than in others.
  • 💡When working with agentic coding tools:
    • Know when to quit
    • Review, review, review
    • Fight complacency
    • Keep sessions small
    • Optimise for a fast feedback loop
    • Leverage code quality monitoring tools
  • 💡Prompting techniques (a sketch combining several of these follows this list):
    • Write detailed specifications and custom instructions for agentic tools to follow
    • Provide high level architectural context first
    • Share relevant code snippets
    • Summarise the module’s purpose
    • State the deliverable in one line
    • Write task specific prompts
    • Share the exact error messages whilst debugging
    • Reframe the problem to stop AI going in loops
    • During refactoring give it specific improvement goals (e.g. ✅“refactor this class to improve modularity, cohesion and reduce coupling but keep types internal” ❌”this code is a mess, fix it!”)
  • Productivity claims about engineers’ use of AI might be questionable:
    • One study defines productivity as the number of tasks/PRs completed, successful builds and number of commits (not lines of code written, which is almost never a good metric of success)
    • More junior engineers (those with the least experience) saw greater productivity gains than seniors. Seniors were also less likely than juniors to accept LLM suggestions. Note: the study doesn’t clarify whether the more productive juniors also bridged their learning gaps faster, so looking only at output might be ill-advised.
    • The more specific a task (requiring the least context and limited in scope), the higher the likelihood that AI completes it well enough. And vice versa: the more abstract and ill-defined the problem, and the more it requires abstract reasoning and critical thinking, the less useful AI is going to be, and the productivity gains might be minimal.
    • Human thinking and effort are still needed to break a big problem down into small pieces that AI can execute reliably.
    • Since LLMs are trained on existing code, they are likely to reproduce existing logic and might struggle with novel and innovative work.
    • If engineers stop thinking critically and improving the design, AI models will end up learning from what AI models produced before, which might lower the overall quality of the software produced and, in turn, the quality of subsequent LLM output.
    • More context might make it less likely for LLMs to retrieve specific information, so it’s best to narrow things down to specific segments/paragraphs of a document, for example.
    • Engineers will have to sharpen their investigative and evaluative skills
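
To make the prompting techniques above concrete, here is a minimal sketch in Python of assembling a structured, task-specific prompt: high-level architectural context first, then the module’s purpose, the relevant snippet, a one-line deliverable and the exact error when debugging. All the names and example strings here are mine and purely illustrative, not from the event.

```python
def build_prompt(architecture: str, module_purpose: str, snippet: str,
                 deliverable: str, error_message: str | None = None) -> str:
    """Compose a structured, task-specific prompt for an AI coding tool."""
    parts = [
        f"Architectural context:\n{architecture}",  # high-level context first
        f"Module purpose:\n{module_purpose}",       # summarise the module
        f"Relevant code:\n{snippet}",               # share only what is needed
        f"Deliverable:\n{deliverable}",             # stated in one line
    ]
    if error_message:
        # Share the exact error message when debugging
        parts.append(f"Exact error:\n{error_message}")
    return "\n\n".join(parts)


# Hypothetical usage -- a specific, tightly scoped refactoring task:
prompt = build_prompt(
    architecture="Monolithic Django app; a services layer talks to Postgres.",
    module_purpose="billing.py computes per-seat invoices.",
    snippet="def invoice(seats, rate):\n    return seats * rate",
    deliverable="Refactor invoice() to support tiered pricing; keep its public signature.",
)
print(prompt)
```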

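On the token-cost point above, here is a rough sketch of estimating the per-request cost of a hosted LLM before sending a prompt. It assumes the `tiktoken` tokenizer library, and the prices are placeholders rather than real rates, so check your provider’s current pricing.

```python
import tiktoken  # tokenizer library used by OpenAI-style models

# Placeholder per-million-token prices -- NOT real rates; check your provider.
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

def estimate_cost(prompt: str, expected_output_tokens: int = 500) -> float:
    """Roughly estimate the dollar cost of a single LLM request."""
    enc = tiktoken.get_encoding("cl100k_base")  # a common encoding
    input_tokens = len(enc.encode(prompt))
    return (input_tokens * INPUT_PRICE_PER_M
            + expected_output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

print(f"~${estimate_cost('Refactor this class to improve cohesion...'):.4f} per call")
```
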
There were a bunch of other interesting conversations, but the core essence of it all is what I enumerated above. A couple of the ideas here have given me more food for thought; as of this writing I am trying them out, so I will share those experiences in due time.

👋
