Welcome back to Prompts Daily.
We're back with a Sunday Deep Dive.
Today, I want to show you a way to use AI to automate and streamline your tasks, so you no longer have to spend hours managing your projects and can instead focus on the work that matters most: the work itself.
So, sit back, relax, and enjoy this deep dive on Height, the world's first and only AI project manager. It makes collaboration and wrapping up projects smooth from start to finish.
We'll cover these 5 use cases that will take your productivity to the next level:
Automate team standups
Draft release notes
Summarize your conversations
Track feedback as tasks
Prevent duplicate tasks
I've been using Height's Copilot AI features for a few weeks now, and I'm amazed at how much I'm able to get done. It doesn't just make it easier to manage projects; it also cuts down on the time I spend doing repetitive tasks each week.
Today, I'll show you how you can use this free tool to run your business, bring teams together to collaborate, and accomplish your goals with less effort.
PS. I have an exclusive interview with the team for you. It's at the end; stay tuned!
Let's get into it 👇
Copilot automatically generates standup reports for my team.
It combs through our task activity and creates a summary of what everyone has been working on.
This report is delivered to me daily, so I can always stay up-to-date on my team's progress. And each person's update is its own chat bubble, so everyone can react to each other's work like you would in a typical standup.
I no longer have to worry about running or preparing for standups each week.
Instead, I can just focus on my work, knowing that I'm always in the loop.
And for my team, no updates slip through the cracks. Even small tasks and contributions are included in the AI-generated standups, which saves them headaches and time each week, too.
This one is really a game-changer.
Writing release notes is historically… well, tedious. It's no fun repeating the same process manually every single week.
Normally, I have to carefully review my task list to ensure that I haven't missed any of our latest updates, and then spend hours trying to write clear and concise descriptions.
But this new feature now does all of the hard work for me.
Right in my Height workspace, Copilot analyzes my tasks and generates a draft of release notes in easy-to-understand language.
All I have to do is tweak the text to make sure it fits my brand voice and send it out to my customers.
With just a few clicks, I have accurate, organized, and beautifully crafted summaries of my project updates.
Copilot intelligently categorizes all of the updates into "New Features," "Improvements," and "Bug Fixes," so it's easy for our users to understand the latest updates our team's been working on (and how they'll affect them).
I used to dread clicking into a task or project and seeing dozens of new chat messages.
I would have to scroll back up and read every single one to get full context.
It was time-consuming and frustrating.
But now, with Copilot Catch-Ups, I can effortlessly catch up on key points and progress. With one click, Copilot generates a custom summary of chat messages and key decisions.
This makes it so much faster for me to jump in and start taking action, whether I've been away from the office or just joining a project that's already in progress.
My favorite part is that you can choose whether to summarize all of the chat messages on a task or just summarize a specific section of messages.
As I receive feedback and requests in chat, AI does the heavy lifting of parsing text into subtasks.
I can click on any chat message and have Copilot suggest and create subtasks for me.
This way, no feedback or action items slip through the cracks, and everyone stays accountable, all without me or my team spending time actually building those additional tasks into our workflow ourselves.
One of the headaches of project management is creating a new task only to realize that it already existed, waiting to be checked off for ages.
Especially when you're working with multiple collaborators and lists, the same core ask can show up multiple times.
Now, avoiding duplicate tasks is easy. Whenever I'm creating a new task, Copilot scans my workspace for similar tasks and suggests potential duplicates (later, we'll find out exactly how and why this process works!).
This feature sounds simple, but it's really effective.
Before I started using Height, it was impossible for me to know whether I or my team had already created a specific task, leading to a cluttered workspace… but no more!
These are just a few of the ways Height is working to infuse AI throughout every part of your project management workflow.
Each AI feature is built to solve real pain points and make project management more efficient.
I spoke to Sebastien Villar, Head of Engineering at Height, to find out how the team has made decisions about tools and technologies and innovated to become the first true AI project manager, including where they started and where they're heading next.
Let's dive in 👇
At Height, our goal is to help people drive their projects from start to finish by offloading all the manual and tedious parts of project management to AI. Our first step was identifying customers' biggest current project management pain points in their day-to-day, so we could alleviate those frustrations with AI.
The natural place to start was by weaving AI into our chat feature, which lives inside every task. Chat is how team members communicate and collaborate on projects, so it was a logical evolution to include AI features right inside the chat.
When it came time to choose our tools, we decided to build on GPT rather than fine-tune other LLMs, since we knew AI would be fast-changing and wanted something that would evolve without us needing to retrain our own models. OpenAI's continual advancements ensured that as GPT improved, Height's AI features could evolve, too. GPT's rapid development also makes it easy for our small development team to work quickly and ship new features as those capabilities arise.
Historically, using GPT for things like question-and-answer tasks or step-by-step guides isn't too complicated: you provide concise instructions and receive a specific text response. But when you're venturing into more advanced features, the complexity escalates. Our team had a few challenges to navigate: we had to meticulously formulate instructions and prompts, balance context within token limits, manage formatted data, and support direct user streaming, all while maintaining reliability, a challenge on its own since GPT's responses could easily deviate from our expectations.
Then, OpenAI introduced support for function calls in GPT-4, which created more reliable formatted outputs and explicit train-of-thought support. That's when we decided to address all of the challenges we'd been facing by creating an internal library: GPTRunner.
For background, GPTRunner helps you create AI features by harnessing GPT. Because we utilized GPT for the majority of our features, we needed a way to address those initial challenges I mentioned, like crafting intricate instructions and balancing context within token limits.
There are four main ways GPTRunner helps us address those challenges: reusable context and instructions, reliable output, streaming, and dynamic token limits.
Reusable Context & Instructions: When interacting with GPT, we typically need to provide three types of information: system instructions (general guidance on tone and response approach), contextual data (such as task descriptions and messages), and user queries (both free-form and specialized). To ensure consistency and reusability, our library employs configurable Context objects. These objects gather relevant information from various sources and format it for GPT. All this data is then appended to the system prompt when triggering the GPT request.
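As a rough sketch of the idea (the class and function names here are illustrative, not GPTRunner's actual API), each Context object renders one source of data, and the runner appends every rendering to the system instructions:

```typescript
// Hypothetical sketch of reusable Context objects: each one gathers
// relevant information from a single source and formats it for GPT.
interface Context {
  render(): string;
}

class TaskContext implements Context {
  constructor(private name: string, private description: string) {}
  render(): string {
    return `Task: ${this.name}\nDescription: ${this.description}`;
  }
}

class MessagesContext implements Context {
  constructor(private messages: string[]) {}
  render(): string {
    return "Recent messages:\n" + this.messages.map((m) => `- ${m}`).join("\n");
  }
}

// Append every context's output to the system instructions before
// triggering the GPT request.
function buildSystemPrompt(instructions: string, contexts: Context[]): string {
  return [instructions, ...contexts.map((c) => c.render())].join("\n\n");
}
```

Because each source formats itself, the same Context classes can be reused across features that need the same background data.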
Reliable Output: Ensuring GPT provides consistent responses is a common challenge. To address this, we enforce a specific output format: we describe an output function that GPT must call, and validate the response against it. This validation, using TypeScript with the zod and zod-to-json-schema libraries, ensures that GPT adheres to expected formats. If the response deviates, we feed GPT's incorrect response back, along with an explanation of what didn't pass validation and a request to adhere to the specification, guiding it to correct its response.
Streaming: LLM models generate responses token by token, while we exclusively deal with formatted data. To efficiently handle this, we developed a method for parsing partial JSON and validating it against our function specifications. We use the jsonrepair tool to create a valid JSON object, enabling us to respond directly to user queries within the chat. This approach is less safe than parsing a complete response. We don't know for sure that GPT's reply will be completely correct once streaming has finished, but we make sure to only start streaming to the user if the content we receive at least follows a subset of the format we specified. In case GPT ends with an invalid response, we rely on good UX to surface this to the user.
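To illustrate the idea (Height uses the jsonrepair library; this hand-rolled stand-in only closes unterminated strings, arrays, and objects, and will still fail on mid-key or trailing-comma cutoffs), a partial-JSON repair might look like:

```typescript
// Close any open string, array, or object in a truncated JSON stream so
// the prefix can be parsed and rendered to the user while tokens arrive.
function repairPartialJson(partial: string): string {
  let inString = false;
  let escaped = false;
  const stack: string[] = []; // closers we still owe, innermost last
  for (const ch of partial) {
    if (escaped) {
      escaped = false;
      continue;
    }
    if (ch === "\\") {
      if (inString) escaped = true;
      continue;
    }
    if (ch === '"') {
      inString = !inString;
      continue;
    }
    if (!inString) {
      if (ch === "{") stack.push("}");
      else if (ch === "[") stack.push("]");
      else if (ch === "}" || ch === "]") stack.pop();
    }
  }
  let repaired = partial;
  if (inString) repaired += '"'; // terminate a half-streamed string
  while (stack.length) repaired += stack.pop(); // close open containers
  return repaired;
}
```

Each new chunk re-runs the repair, so the UI always holds a parseable snapshot of whatever the model has emitted so far.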
Dynamic Token Limit: LLMs have token limitations, with varying limits based on the model used (e.g., 8k or 32k tokens). To balance flexibility and cost-effectiveness, our library dynamically selects the appropriate model based on the tokens sent. We employ js-tiktoken to count tokens, factoring in OpenAI's tokenization methods and additional tokens inserted by OpenAI. We also handle errors resulting from exceeding token limits, ensuring resilience in case of miscalculation.
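A simplified sketch of that selection logic. The real implementation counts tokens precisely with js-tiktoken; this version substitutes a rough characters-per-token heuristic, and the model list and reserve size are illustrative:

```typescript
// Candidate models ordered cheapest-first; pick the first whose context
// window fits the prompt plus room reserved for the reply.
const MODELS = [
  { name: "gpt-4", limit: 8192 },
  { name: "gpt-4-32k", limit: 32768 },
];

// Crude stand-in for js-tiktoken: ~4 characters per token for English text.
function approxTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function pickModel(prompt: string, reservedForReply = 1024): string {
  const needed = approxTokens(prompt) + reservedForReply;
  const model = MODELS.find((m) => m.limit >= needed);
  if (!model) throw new Error("prompt exceeds every model's token limit");
  return model.name;
}
```

Keeping the cheaper, smaller-context model as the default and escalating only when the prompt demands it is what makes the approach cost-effective.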
Beyond what we were doing with GPT and GPTRunner, we knew we wanted to build some features that required embeddings. Mainly, we used embeddings to build our duplicate tasks feature, which flags tasks that look like potential duplicates to keep your workspace uncluttered. Embeddings serve as vector representations of text meanings, and though they are fundamental components of LLMs, they can also be employed directly, especially for similarity search. That's how we started detecting similar tasks in workspaces: by indexing task information into embedding vectors.
First, we needed to choose descriptive, but not overly detailed, task information for indexing. We had to be selective, because indexing too much information makes the similarity score less relevant; in other words, indexing too many characteristics means no tasks will be similar enough to show up. But we needed enough information indexed that GPT would be able to recognize similar tasks and help users prevent duplicates. For those reasons, we opted to index task names and descriptions together for a more comprehensive task representation. Because these elements change relatively infrequently, we were also able to minimize the need for frequent reindexing.
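Conceptually, duplicate detection then reduces to a similarity search over those vectors. A toy sketch using cosine similarity (the vectors and threshold here are illustrative; in production the vectors come from an embedding model and the search runs in a vector database):

```typescript
// Cosine similarity: 1.0 means the vectors point the same way
// (semantically similar text), 0 means they are unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Flag indexed tasks whose similarity to the new task clears a threshold.
function findPotentialDuplicates(
  newTask: number[],
  indexed: { id: string; vector: number[] }[],
  threshold = 0.9
): string[] {
  return indexed
    .filter((t) => cosineSimilarity(newTask, t.vector) >= threshold)
    .map((t) => t.id);
}
```

The threshold is the tuning knob the paragraph above alludes to: index too many characteristics and nothing clears it; index too few and everything does.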
Next, we had to choose the right embedding technology. We selected the ada model by OpenAI for our embedding needs. At the time, OpenAI was the market leader, and we didn't yet have the benchmarks to know which model would best suit our use-case goals.
Finally, our team needed to research storage and query infrastructures, which started with evaluating multiple options. Ultimately, we went with Pinecone because its infrastructure is optimized for rapid vector calculations. It also has a user-friendly API and reasonable pricing, both of which are important to our team.
Though our team has built and shipped new AI features quickly over the last several months, this is just the beginning. We're already working on new ways to integrate AI more deeply into our core workflow experience. The way we've approached AI is different from any other project management tool's, and we plan to continue that way. As we start building Height 2.0, I'm particularly excited about "plugin-based train-of-thought," where GPT can autonomously determine the information it needs to respond to user queries and perform actions on their behalf.
GPTRunner also now accommodates plugin definitions, each with its own function that GPT can call to gather information or execute actions. The runner's role is to match specific function calls with plugins and relay the responses to the model.
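A hypothetical sketch of that dispatch step (names are illustrative, not Height's actual API): the runner keeps a registry of plugins, matches each model-issued function call to one, and returns the plugin's response for relay back to the model:

```typescript
// A plugin exposes one function the model may call to gather
// information or execute an action on the user's behalf.
interface Plugin {
  name: string;
  run(args: Record<string, unknown>): string;
}

class PluginRunner {
  private plugins = new Map<string, Plugin>();

  register(plugin: Plugin): void {
    this.plugins.set(plugin.name, plugin);
  }

  // Match a model-issued function call to its plugin and return the
  // response that gets relayed back to the model.
  dispatch(call: { name: string; args: Record<string, unknown> }): string {
    const plugin = this.plugins.get(call.name);
    if (!plugin) throw new Error(`no plugin registered for ${call.name}`);
    return plugin.run(call.args);
  }
}
```

In a full loop, the model would keep issuing calls and receiving responses until it has enough context to answer the user directly.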
Our next step is to implement this new method into our product. It's going to pave the way for more intricate features and put the power of GPT directly in our users' hands. Since user queries are often unpredictable, and including all workspace information in the context is impractical, these plugins will offer a more dynamic solution for figuring out the right actions without preplanning.
If you have any questions or ideas about our approach, I'd love to hear them ([email protected]).
If you enjoyed this, sign up for Height 2.0 now to stay in the loop on what's coming next.
Thanks to Height for partnering with us today in sharing the future of AI-driven productivity with the world!
That's all, team!
If you enjoyed this deep dive, share it with your friends. Click here to share it on Twitter.
Which deep dive do you want to read next? Reply and let me know!
That's it for now :)
Stay curious, leaders!
- Mr. Prompts
Enjoyed the newsletter? Please forward it to a friend. It only takes 3 seconds. Writing this took 10 hours. They can sign up here.
If youâd like to become a sponsor, apply here.