As a partner at Concept Ventures, a VC firm built around deep technology and market research, I spend my days swimming in information: academic papers, market reports, interview notes, and written analyses. Our job is to synthesize these data points into a nuanced perspective that informs our investment decisions.
Reading the hype online, it's tempting to think you can delegate anything to AI. But for something so central to our job, we don't just need it done; we need it to be excellent. How much can AI really do for us?
In this piece, I'll share:
- How we structure instructions to get the best analysis out of an AI model
- Where I critically intervene and rely on my own thinking
- How you can get an AI to mirror the way you write
When relying on an LLM, you often get something that only looks good at first glance: the AI has missed details, or an important nuance. And for this core part of my job, decent isn't enough; I need the output to be excellent.
This AI accuracy gap creates a painful cycle where you spin in circles, re-prompting the system to get what you want, until you're essentially left rewriting the entire output yourself. In the end, it's unclear whether AI helped at all. The more effective approach is understanding that you (the human) do the thinking and leave the writing (i.e., formatting and synthesis) to the LLM. This simple separation is what elevates AI-augmented workflows from decent to exceptional.
Here's an example of how we build these kinds of workflows at Concept Ventures, and how you can too. We'll illustrate with the automation of our internal market research reports.
Step 1: Define the thinking process
Prepare a document with very detailed instructions on the underlying analysis you want to achieve. Clearly outline the context and goals, then dive into all the details of how you deconstruct a broad analysis: the specific questions you'd ask, follow-up sub-questions, how they should be answered with data, and key callouts or exceptions.
You can use an AI assistant to help generate a first draft of this, sharing completed documents and asking it to deconstruct the analysis. But these instructions are critical, so it's important to finish writing them by hand and keep updating them over time as you tweak your analysis. (A minimal code sketch of loading these instructions into a prompt follows the example below.)
Example analysis instructions included in the prompt (note: the full instructions will typically run 2 to 10 or more pages):
- Analyze the underlying market structure: Is it fragmented or consolidated? Why? (e.g., high specialization needs, regulatory barriers, network effects, legacy tech debt.) How is fragmentation changing over time, and does it differ across market segments?
- Use the following data sources and analyses: . . .
- Evaluate key market dynamics: What are the typical switching costs? How prevalent is tech debt? What are the typical sales cycles and buyer behaviors? How do incumbents maintain their position (moats)?
- Use the following data sources and analyses: . . .
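To make this concrete, here's a minimal sketch of how an instruction document like the one above could be assembled into a prompt. The file name and prompt wording are illustrative assumptions, not our production setup:

```python
from pathlib import Path

def build_system_prompt(instructions_path: str) -> str:
    """Load the hand-written analysis instructions (often 2-10+ pages)
    from a versioned file and wrap them in a system prompt."""
    instructions = Path(instructions_path).read_text()
    return (
        "You are drafting an internal market research report.\n"
        "Follow these analysis instructions exactly. Where they name\n"
        "data sources, use only those sources.\n\n" + instructions
    )

# Hypothetical file name; in practice this document is written by hand
# and updated whenever the analysis approach changes.
system_prompt = build_system_prompt("market_analysis_instructions.md")
```

Keeping the instructions in a standalone file, rather than buried in application code, is what makes the hand-editing loop described above practical.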
Step 2: Lay out your human-led analysis
Provide your primary analysis, along with raw notes and instructions, to the AI. We set our systems up so that they require the user to supply their key takeaways and analysis, guiding the system toward what's most important: the areas to focus on, key opportunities, and potential concerns. These are typically four to five detailed bullet points of two to four sentences each. This is the crux of the analysis and should therefore never be AI-generated. (A sketch of how to enforce this follows the example below.)
Example key takeaways provided to the system:
- This market has historically been small and fragmented, without major software providers. We expect it to grow dramatically, primarily by automating what is currently labor spend and by consolidating a set of point solutions. The underlying demand for this capability will also increase with XYZ challenges. We feel very confident in these two growth levers.
- There's substantial concentration at the upper end of the market. Major platforms control around X% of the market and have all invested heavily in their own technology. But below the top-n largest players, there's a healthy cohort of medium-to-large buyers that have the scale to need this solution but don't want to build it. We think that's sufficient to build a sizeable company, although market concentration and build-versus-buy remain a key long-term risk.
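One way to enforce that rule is to make the human takeaways a required input, so the pipeline simply won't run without them. A minimal sketch, reusing the hypothetical `system_prompt` from Step 1; the four-bullet threshold mirrors our typical format but is otherwise arbitrary:

```python
def add_human_analysis(system_prompt: str, takeaways: list[str],
                       raw_notes: str) -> str:
    """Fold the analyst's takeaways and raw notes into the prompt.
    The takeaways are mandatory: this part is never AI-generated."""
    if len(takeaways) < 4:
        raise ValueError("Provide at least four detailed takeaways "
                         "before generating the report.")
    bullets = "\n".join(f"- {t}" for t in takeaways)
    return (f"{system_prompt}\n\n"
            "Key takeaways from the analyst (treat these as the core "
            f"thesis of the report):\n{bullets}\n\n"
            f"Raw notes:\n{raw_notes}")
```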
Step 3: Run an interactive Q&A to hone the analysis
This dialogue is the most interesting and fun step: Have the system generate questions to clarify the contours of your analysis. Based on the primary analysis, along with the notes and general instructions, the system asks questions about things that either weren't clear or had conflicting information/instructions. This sharpens the analysis and gives the user a chance to share more of their thought process and guidance. (A sketch of this loop follows the example below.)
Example Q&A:
- Q from the AI: You said that major platforms have invested heavily in this technology, but conversations with some of those companies indicated an eagerness to buy. Do you think that will be common, or were they exceptions?
- A from the human: Good point. I do think many of them will buy eventually, but because they've built a lot of technology internally, they're more likely to want a new platform only for certain components, as opposed to buying an end-to-end system. And the very largest companies (top three to five) will build everything in-house.
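Here's a sketch of what that loop can look like. `call_llm` is a placeholder for whatever model API you use; the loop structure, not the API, is the point:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for your model API of choice (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def clarification_round(prompt: str, max_questions: int = 5) -> str:
    """Ask the model to surface unclear or conflicting points, then fold
    the analyst's answers back into the prompt before drafting."""
    questions = call_llm(
        f"{prompt}\n\nBefore drafting, list up to {max_questions} "
        "questions about anything unclear or conflicting in the "
        "analysis above. One question per line, no preamble."
    )
    transcript = []
    for q in filter(None, (line.strip() for line in questions.splitlines())):
        answer = input(f"Q from the AI: {q}\nA from the human: ")
        transcript.append(f"Q: {q}\nA: {answer}")
    return prompt + "\n\nClarifications from the analyst:\n" + "\n\n".join(transcript)
```

Because the answers are appended to the prompt, each round leaves a written trail of the analyst's reasoning that the final draft can draw on.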
Step 4: Share past work to match tone, not ideas
Use previous examples of your work to replicate tone and style only after the scaffolding work is done. Most people skip straight to this step, but we found (and research shows) that finished examples are most useful for matching tone and writing style, as opposed to shaping the analysis itself.
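Concretely, past reports enter the prompt only at the final drafting stage, framed explicitly as style references. A sketch, continuing the hypothetical helpers above:

```python
def add_style_examples(prompt: str, past_reports: list[str]) -> str:
    """Append finished reports as tone and style references only, after
    the thinking (instructions, takeaways, Q&A) is already locked in."""
    examples = "\n\n---\n\n".join(past_reports)
    return (f"{prompt}\n\n"
            "Match the tone, voice, and formatting of the reports below. "
            "Do NOT borrow their conclusions or analysis:\n\n"
            f"{examples}")
```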
In researching the best AI-native products, we've seen that practically all the work goes into defining the thinking and analysis portion of the problem (detailed instructions, guidelines, orchestration, and tooling) so the AI system knows what it should do and simply executes on it.
At Concept Ventures, we've started to mirror the same approach, developing highly constrained, human-in-the-loop workflows that direct the analysis and leave the LLM to execute basic information extraction and synthesis. That's how we, and our AI systems, have started working smarter: not by asking AI to think for us, but by helping it think better.

