
🌐 How to Use Height AI to Get More Done in Less Time

The AI Project Manager

Welcome back to Prompts Daily.

We’re back with a Sunday Deep Dive.

Today, I want to show you a way to use AI to automate and streamline your tasks — so you no longer have to spend hours managing your projects and can instead focus on the work that matters most — the work itself.

So, sit back, relax, and enjoy this deep dive on Height, the world’s first and only AI project manager — it makes collaborating on and wrapping up projects smooth from start to finish.

We’ll cover these 5 use cases that will take your productivity to the next level:

  • Automate team standups

  • Draft release notes

  • Summarize your conversations

  • Track feedback as tasks

  • Prevent duplicate tasks

I’ve been using Height’s Copilot AI features for a few weeks now, and I'm amazed at how much I'm able to get done — it doesn’t just make it easier to manage projects but also cuts down on the time I spend doing repetitive tasks each week.

Today, I’ll show you how to use this free tool to run your business, bring your team together to collaborate, and accomplish your goals with less effort.

PS. I have an exclusive interview with the team for you. It’s at the end - stay tuned!

Let’s get into it 👇

1. Automate team standups

Copilot automatically generates standup reports for my team.

It combs through our task activity and creates a summary of what everyone has been working on.

This report is delivered to me daily, so I can always stay up-to-date on my team's progress — and each person’s update is its own chat bubble, so everyone can react to each other’s work like they would in a typical standup.

I no longer have to worry about running or preparing for standups each week.

Instead, I can just focus on my work, knowing that I'm always in the loop.

And for my team, no updates slip through the cracks — even small tasks and contributions are included in the AI-generated standups, which saves them headaches and time each week, too.

2. Draft release notes

This one is really a game-changer.

Writing release notes has historically been... well, tedious. It’s no fun repeating the same process manually every single week.

Normally, I have to carefully review my task list to ensure that I haven’t missed any of our latest updates, and then spend hours trying to write clear and concise descriptions.

But this new feature now does all of the hard work for me.

Right in my Height workspace, Copilot analyzes my tasks and generates a draft of release notes in easy-to-understand language.

All I have to do is tweak the text to make sure it fits my brand voice and send it out to my customers.

With just a few clicks, I have accurate, organized, and beautifully crafted summaries of my project updates.

Copilot intelligently categorizes all of the updates into “New Features,” “Improvements,” and “Bug Fixes,” so it’s easy for our users to understand the latest updates our team’s been working on (and how they’ll impact them).

3. Summarize conversations

I used to dread clicking into a task or project and seeing dozens of new chat messages.

I would have to scroll back up and read every single one to get full context.

It was time-consuming and frustrating.

But now, with Copilot Catch-Ups, I can effortlessly catch up on key points and progress. With one click, Copilot generates a custom summary of chat messages and key decisions.

This makes it so much faster for me to jump in and start taking action, whether I’ve been away from the office or am joining a project that’s already in progress.

My favorite part is that you can choose whether to summarize all of the chat messages on a task or just summarize a specific section of messages.

4. Track feedback as tasks

As I receive feedback and requests in chat, AI does the heavy lifting of parsing text into subtasks.

I can click on any chat message and have Copilot suggest and create subtasks for me.

This way, no feedback or action items slip through the cracks, and everyone stays accountable — all without me or my team spending time actually building those additional tasks into our workflow ourselves.

5. Prevent duplicate tasks

One of the headaches of project management is creating a new task only to realize it has already existed for ages, just waiting to be checked off.

Especially when you’re working with multiple collaborators and lists, the same core ask can show up multiple times.

Now, avoiding duplicate tasks is easy. Whenever I’m creating a new task, Copilot scans my workspace for similar tasks and suggests potential duplicates (later, we’ll find out exactly how and why this process works!).

This feature sounds simple, but it’s really effective.

Before I started using Height, it was impossible for me to know whether I or my team had already created a specific task, leading to a cluttered workspace. But no more!

These are just a few of the ways Height is working to infuse AI throughout every part of your project management workflow.

Each AI feature is built to solve real pain points and make project management more efficient.

I spoke to Sebastien Villar, Head of Engineering at Height, to find out how the team has made decisions about tools and technologies and innovated to become the first true AI project manager — including where they started and where they’re heading next.

Let’s dive in 👇

What was your goal with implementing AI features, and how did you choose the right tools and technologies to support that goal?

At Height, our goal is to help people drive their projects from start to finish by offloading all the manual and tedious parts of project management to AI. Our first step was identifying customers’ biggest current project management pain points in their day-to-day, so we could alleviate those frustrations with AI.

The natural place to start was by weaving AI into our chat feature, which lives inside every task. Chat is how team members communicate and collaborate on projects, so it felt like an obvious evolution to include AI features right inside the chat.

When it came time to choose our tools, we decided to leverage GPT rather than fine-tune other LLMs, as we knew AI would be fast-changing and wanted something that would evolve without us needing to retrain our own models. OpenAI’s continual advancements helped us ensure that as GPT improved, Height’s AI features could evolve, too. And because GPT is developing fast, it’s easy for our smaller development team to work quickly and ship new features as those capabilities arise.

Historically, using GPT for things like question-and-answer or step-by-step guides hasn’t been too complicated. You just provide concise instructions and receive a specific text response. But when you’re venturing into more advanced features, the complexity escalates. Our team had a few challenges to navigate: we had to meticulously formulate instructions and prompts, balance context within token limits, manage formatted data, and support direct user streaming, all while maintaining reliability — a challenge on its own, since GPT’s responses could easily deviate from our expectations.

Then, OpenAI introduced support for function calls in GPT-4, which created more reliable formatted outputs and explicit train-of-thought support. That’s when we decided to address all of those challenges we’d been facing by creating an internal library: GPTRunner.

Why did you create GPTRunner, and how does it solve some of the challenges your team faced in building new AI features?

For background, GPTRunner helps you create AI features by harnessing GPT. Because we utilized GPT for the majority of our features, we needed a way to address those initial challenges I mentioned, like crafting intricate instructions and balancing context within token limits.

There are four main ways GPTRunner helps us address those challenges: reusable context and instructions, reliable output, streaming, and dynamic token limits.

Reusable Context & Instructions: When interacting with GPT, we typically need to provide three types of information: system instructions (general guidance on tone and response approach), contextual data (such as task descriptions and messages), and user queries (both free-form and specialized). To ensure consistency and reusability, our library employs configurable Context objects. These objects gather relevant information from various sources and format it for GPT. All this data is then appended to the system prompt when triggering the GPT request.
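To make that pattern concrete, here is a minimal TypeScript sketch of what a configurable Context object could look like. The interface, the fetchTask helper, and the field names are assumptions made for illustration, not Height’s actual internals:

```typescript
// A minimal sketch of reusable Context objects, assuming a simple data-access
// helper; names and shapes are illustrative, not Height's actual internals.
interface Context {
  name: string;
  // Gathers relevant data (task description, messages, etc.) and formats it
  // as text that can be appended to the system prompt.
  render(): Promise<string>;
}

// Stand-in for Height's data layer (an assumption for this sketch).
async function fetchTask(taskId: string): Promise<{ name: string; description: string }> {
  return { name: "Example task", description: "Example description" };
}

const taskContext = (taskId: string): Context => ({
  name: "task",
  render: async () => {
    const task = await fetchTask(taskId);
    return `Task: ${task.name}\nDescription: ${task.description}`;
  },
});

// Every configured Context is rendered and appended to the shared system
// instructions before the GPT request is triggered.
async function buildSystemPrompt(instructions: string, contexts: Context[]): Promise<string> {
  const rendered = await Promise.all(contexts.map((c) => c.render()));
  return [instructions, ...rendered].join("\n\n");
}

// Example usage:
// buildSystemPrompt("You are a helpful project assistant.", [taskContext("task-123")]);
```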

Reliable Output: Ensuring GPT provides consistent responses is a common challenge. To address this, we enforce a specific output format. We describe an output function that GPT must call and validate the response against it. This validation, done in TypeScript with the zod and zod-to-json-schema libraries, ensures that GPT adheres to expected formats. If the response deviates, we guide GPT to fix it for us. By feeding GPT’s incorrect response back, along with an explanation of what didn’t pass validation and a request to adhere to the specification, we can guide it to correct its response.
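Here is a hedged sketch of that validate-and-retry loop using the OpenAI Node SDK with zod and zod-to-json-schema. The schema, function name, and retry count are invented for illustration; Height’s real schemas and prompts will differ:

```typescript
import OpenAI from "openai";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Hypothetical output schema for a summary feature; Height's real schemas differ.
const Summary = z.object({
  headline: z.string(),
  bulletPoints: z.array(z.string()),
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Ask GPT to "call" an output function whose parameters mirror the zod schema,
// validate the arguments, and feed any failure back for one correction attempt.
async function getValidatedSummary(systemPrompt: string, userQuery: string) {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: "system", content: systemPrompt },
    { role: "user", content: userQuery },
  ];

  for (let attempt = 0; attempt < 2; attempt++) {
    const completion = await client.chat.completions.create({
      model: "gpt-4",
      messages,
      functions: [
        {
          name: "return_summary",
          description: "Return the summary in a structured format.",
          parameters: zodToJsonSchema(Summary) as Record<string, unknown>,
        },
      ],
      function_call: { name: "return_summary" },
    });

    const args = completion.choices[0].message.function_call?.arguments ?? "{}";
    let candidate: unknown = null;
    try {
      candidate = JSON.parse(args);
    } catch {
      // Invalid JSON will simply fail schema validation below.
    }

    const parsed = Summary.safeParse(candidate);
    if (parsed.success) return parsed.data;

    // Show GPT its own response, explain what failed validation, and ask it
    // to call the function again while adhering to the specification.
    messages.push(
      { role: "assistant", content: args },
      {
        role: "user",
        content:
          `That response failed validation: ${parsed.error.message}. ` +
          `Please call return_summary again, strictly following the schema.`,
      }
    );
  }
  throw new Error("GPT did not return a valid summary after retrying.");
}
```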

Streaming: LLMs generate responses token by token, while we exclusively deal with formatted data. To efficiently handle this, we developed a method for parsing partial JSON and validating it against our function specifications. We use the jsonrepair tool to create a valid JSON object, enabling us to respond directly to user queries within the chat. This approach is less safe than parsing a complete response: we don’t know for sure that GPT’s reply will be completely correct once streaming has finished, but we make sure to only start streaming to the user if the content we receive at least follows a subset of the format we specified. In case GPT ends with an invalid response, we rely on good UX to surface this to the user.
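Here is a rough sketch of that streaming approach, assuming the same hypothetical summary feature as above. The partial schema, function name, and callback are illustrative assumptions:

```typescript
import OpenAI from "openai";
import { z } from "zod";
import { jsonrepair } from "jsonrepair";

// Hypothetical partial shape of a streamed summary; field names are illustrative.
const PartialSummary = z.object({ headline: z.string() }).partial();

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Accumulate the function-call arguments as they stream in, repair the
// incomplete JSON, and only surface content once it matches at least a
// subset of the expected format.
async function streamSummary(
  messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[],
  onUpdate: (headline: string) => void
) {
  const stream = await client.chat.completions.create({
    model: "gpt-4",
    messages,
    stream: true,
    functions: [
      {
        name: "return_summary",
        parameters: {
          type: "object",
          properties: { headline: { type: "string" } },
        },
      },
    ],
    function_call: { name: "return_summary" },
  });

  let buffer = "";
  for await (const chunk of stream) {
    buffer += chunk.choices[0]?.delta?.function_call?.arguments ?? "";
    try {
      // jsonrepair closes unterminated strings and brackets so the partial
      // payload can be parsed before the model has finished responding.
      const candidate = JSON.parse(jsonrepair(buffer));
      const parsed = PartialSummary.safeParse(candidate);
      if (parsed.success && parsed.data.headline) onUpdate(parsed.data.headline);
    } catch {
      // Not enough tokens yet to form repairable JSON; wait for the next chunk.
    }
  }
}
```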

Dynamic Token Limit: LLMs have token limitations, with varying limits based on the model used (e.g., 8k or 32k tokens). To balance flexibility and cost-effectiveness, our library dynamically selects the appropriate model based on the tokens sent. We employ js-tiktoken to count tokens, factoring in OpenAI's tokenization methods and additional tokens inserted by OpenAI. We also handle errors resulting from exceeding token limits, ensuring resilience in case of miscalculation.
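As a simple illustration of that model-selection idea, here is a small sketch using js-tiktoken to count tokens and pick between an 8k and a 32k model. The per-message overhead constant, limits, and completion headroom are assumptions, not Height’s exact accounting:

```typescript
import { encodingForModel } from "js-tiktoken";

// Rough per-message overhead that the chat format adds around each message;
// treat this constant as an assumption, not an exact figure for every model.
const TOKENS_PER_MESSAGE = 4;

// Count prompt tokens and pick the smallest model that fits, leaving headroom
// for the completion. The limits and model names here are illustrative.
function pickModel(
  messages: { role: string; content: string }[],
  maxCompletionTokens = 1000
): string {
  const enc = encodingForModel("gpt-4");
  const promptTokens = messages.reduce(
    (sum, m) => sum + enc.encode(m.content).length + TOKENS_PER_MESSAGE,
    0
  );
  const needed = promptTokens + maxCompletionTokens;

  if (needed <= 8_192) return "gpt-4"; // 8k context window
  if (needed <= 32_768) return "gpt-4-32k"; // 32k context window
  throw new Error(`Prompt too large (${promptTokens} tokens); trim the context first.`);
}
```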

Were there other layers or tools you needed to implement, outside of GPT and GPTRunner? How did you choose those and why?

Beyond what we were doing with GPT and GPTRunner, we knew we wanted to build some features that required embeddings. Mainly, we used embeddings to build our duplicate tasks feature, which flags tasks that look like potential duplicates to keep your workspace uncluttered. Embeddings serve as vector representations of text meanings, and though they are fundamental components of LLMs, they can also be employed directly, especially for similarity search. That’s how we started detecting similar tasks in workspaces: by indexing task information into embedding vectors.

First, we needed to choose descriptive, but not overly detailed, task information for indexing. We had to be selective, because indexing too much information makes the similarity score less relevant — in other words, indexing too many characteristics means no tasks will be similar enough to show up. But we needed enough information to be indexed that GPT would be able to recognize similar tasks and help users prevent duplicates. For those reasons, we opted to index task names and descriptions together for more comprehensive task representation. Because these elements change relatively infrequently, we were also able to minimize the need for frequent reindexing.

Next, we had to choose the right embedding technology. We selected the ada model by OpenAI for our embedding needs. At the time, OpenAI was the market leader, and we didn’t yet have the benchmarks to know which model would best suit our use-case goals.

Finally, our team needed to research storage and query infrastructures, which started with evaluating multiple options. Ultimately, we went with Pinecone because its infrastructure is optimized for rapid vector calculations. It also has a user-friendly API and reasonable pricing, both of which are important to our team.
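Putting those pieces together, here is a minimal sketch of the duplicate-detection flow as I understand it: embed the task name and description with OpenAI’s ada model, store the vector in Pinecone, and query for close neighbors when a new task is created. The index name and similarity threshold are assumptions for illustration:

```typescript
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index("tasks"); // hypothetical index name

// Embed the task name and description together, as described above.
async function embedTask(name: string, description: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: `${name}\n\n${description}`,
  });
  return res.data[0].embedding;
}

// Index a task so that future task creations can be checked against it.
async function indexTask(id: string, name: string, description: string) {
  const values = await embedTask(name, description);
  await index.upsert([{ id, values, metadata: { name } }]);
}

// When a new task is being drafted, query for its nearest neighbors and flag
// anything above a similarity threshold; the 0.9 cutoff is an assumption.
async function findPotentialDuplicates(name: string, description: string) {
  const vector = await embedTask(name, description);
  const res = await index.query({ vector, topK: 5, includeMetadata: true });
  return (res.matches ?? []).filter((m) => (m.score ?? 0) > 0.9);
}
```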

What’s coming next for Height and AI?

Though our team has built and shipped new AI features quickly over the last several months, this is just the beginning. We’re already working on new ways to integrate AI more deeply into our core workflow experience. The way we’ve approached AI is different from that of any other project management tool, and we plan to continue that way — as we start building Height 2.0, I’m particularly excited about “plugin-based train-of-thought,” where GPT can autonomously determine the information it needs to respond to user queries and perform actions on their behalf.

GPTRunner also now accommodates plugin definitions, each with its own function that GPT can call to gather information or execute actions. The runner’s role is to match specific function calls with plugins, and relay the responses to the model.
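To illustrate the loop Sebastien describes, here is a rough TypeScript sketch of a plugin runner built on OpenAI function calls. The Plugin shape, the search_tasks example, and the step limit are assumptions, not GPTRunner’s actual interface:

```typescript
import OpenAI from "openai";

// Hypothetical plugin shape; GPTRunner's real interface is internal to Height.
interface Plugin {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON schema for the arguments
  run: (args: any) => Promise<string>; // returns text fed back to GPT
}

// Stand-in for Height's task search (an assumption for this sketch).
async function searchTasks(query: string): Promise<{ id: string; name: string }[]> {
  return [];
}

const plugins: Plugin[] = [
  {
    name: "search_tasks",
    description: "Search the workspace for tasks matching a query.",
    parameters: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
    run: async ({ query }) => JSON.stringify(await searchTasks(query)),
  },
];

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Let GPT decide which plugin it needs: if it calls a function, run the
// matching plugin, feed the result back, and loop until it answers in text.
async function runWithPlugins(messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[]) {
  for (let step = 0; step < 5; step++) {
    const completion = await client.chat.completions.create({
      model: "gpt-4",
      messages,
      functions: plugins.map(({ name, description, parameters }) => ({ name, description, parameters })),
    });

    const message = completion.choices[0].message;
    const call = message.function_call;
    if (!call) return message.content; // GPT answered directly: final response

    const plugin = plugins.find((p) => p.name === call.name);
    if (!plugin) throw new Error(`No plugin named ${call.name}`);

    const result = await plugin.run(JSON.parse(call.arguments || "{}"));
    messages.push(message, { role: "function", name: plugin.name, content: result });
  }
  throw new Error("Too many plugin calls without a final answer.");
}
```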

Our next step is to implement this new method into our product — it’s going to pave the way for more intricate features and put the power of GPT directly in our users’ hands. Since user queries are often unpredictable, and including all workspace information in the context is impractical, these plugins will offer a more dynamic solution for figuring out the right actions without preplanning.

If you have any questions or ideas about our approach, I’d love to hear them ([email protected]).

If you enjoyed this, sign up for Height 2.0 now to stay in the loop on what’s coming next.

Thanks to Height for partnering with us today in sharing the future of AI-driven productivity with the world!

That’s all, team!

If you enjoyed this deep dive, share it with your friends. Click here to share it on Twitter.

Which deep dive do you want to read next? Reply and let me know!

That’s it for now :)

Stay curious, leaders!

- Mr. Prompts

Enjoyed the newsletter? Please forward it to a friend. It only takes 3 seconds. Writing this took 10 hours. They can sign up here.

If you’d like to become a sponsor, apply here.

How did you like today's email?
