Article

Navigating trade-offs in your LLM tech stack

AI luminaries Carlos Alzate, Laurence Moroney, and Bill MacCartney offer insights into what you need to consider when choosing an LLM for your application.

Key takeaways include:
– Open-source vs commercial models
– Data approaches: RAG vs fine-tuning vs prompt engineering
– How to evaluate one model against another
– Using AI agents
– Building technical defensibility into your stack



Carlos Alzate
CTO at AI Fund

Carlos is CTO at AI Fund, where he helps portfolio companies develop their ML technology and counsels them on the current state of the art. His research work has taken him around the world, including Ireland, where he led a research group at IBM Research, and Belgium, where he conducted postdoctoral research.

Laurence Moroney
AI Advocacy Lead at Google & Fellow at AI Fund

Laurence leads AI Advocacy at Google, working with the Google AI Research and product development teams to expand access to AI/ML for everyone. He is also a best-selling author who has written dozens of programming books as well as several acclaimed works of fiction. He is a Fellow at AI Fund, where he advises founders on their ML tech.

Bill MacCartney
Venture Advisor at AI Fund

Bill is an advisor and investor with 20 years of experience in the AI space. He is currently a Venture Advisor at AI Fund. Previously, he served as VP of Machine Learning at Cohere, where he led a team of 50 machine learning scientists and engineers in developing large language models (LLMs). He also led an ML team at Apple and was a senior research scientist at Google.


Moderator: Bola Adegbulu
Principal Builder at AI Fund

Originally an aerospace engineer turned repeat founder, Bola is part of AI Fund’s company building team where he works with our Founders in Residence and Venture Advisors to research new business ideas and create new ventures.


Dan Landau