
(refactor|test)(frontend): chat #1488

Merged: 37 commits, May 4, 2024
Conversation

@amanape (Collaborator) commented May 1, 2024

Summary

Test and rebuild the chat UI and create a new chat slice to accompany it.

Motivation

The <ChatInterface> component appears to be one of the oldest components, dating back to the initial versions of the UI and business logic. It and its children reuse Redux state values inconsistently, rely on confusing CSS, and are generally difficult to read through.

Additionally, the chat slice has methods and properties that are not used at all, such as the separation of assistantMessages and userMessages (which could in any case be derived from messages). It also mixes simple chat logic with UI-dependent logic (the typing effect).

Todo

  • Rebuild and test core components
  • Recreate chat slice
  • Simplify CSS
  • Auto-scroll on new messages
  • Rebuild the useTypingEffect hook
  • Remove old components and state
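For the useTypingEffect rebuild, the core logic can be isolated as a pure helper that maps elapsed time to the currently visible slice of a message. The sketch below is an illustration with assumed names and an assumed typing speed, not the actual implementation; keeping it pure (no React state) makes it easy to unit-test, with a thin hook wrapping it in a timer:

```typescript
// Hypothetical pure helper for a rebuilt typing effect: given the full
// message text and the time elapsed since typing began, return the slice
// that should currently be visible. The name, signature, and speed are
// assumptions for illustration, not the project's actual code.

const DEFAULT_CHARS_PER_SECOND = 40; // assumed typing speed

export function visibleSlice(
  fullText: string,
  elapsedMs: number,
  charsPerSecond: number = DEFAULT_CHARS_PER_SECOND,
): string {
  const visibleChars = Math.floor((elapsedMs / 1000) * charsPerSecond);
  return fullText.slice(0, Math.min(visibleChars, fullText.length));
}
```

Separating the slicing logic from the hook would also keep the UI-dependent typing behavior out of the chat slice, in line with the motivation above.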

Thoughts

It may also be worth considering removing the typing effect altogether. Existing applications built around LLMs "stream" the results generated by the LLM. This feels relatively fast and responsive compared to waiting for the entire generated response to be output, which takes longer.

The way OpenDevin is currently set up, we are giving the illusion of streaming: we first obtain the fully generated response, then forcefully type it out via the useTypingEffect hook.

!!This actually ends up taking the longest compared to streaming OR outputting the full generated response!!
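To make that claim concrete, here is a rough back-of-the-envelope model of when content becomes visible under each strategy (all constants and names are illustrative assumptions, not measurements from OpenDevin):

```typescript
// Illustrative latency model for the three display strategies. The
// constants are assumed ballpark figures, not measured values.

const GENERATION_MS = 4000;   // time for the LLM to finish the response
const FIRST_TOKEN_MS = 300;   // time until the first streamed token arrives
const TYPE_MS_PER_CHAR = 30;  // per-character delay of a typing effect

type Strategy = "stream" | "full" | "fakeTyping";

interface DisplayTiming {
  firstContentMs: number; // when the user first sees anything
  fullDisplayMs: number;  // when the complete response is visible
}

export function timing(chars: number, strategy: Strategy): DisplayTiming {
  switch (strategy) {
    case "stream":
      // tokens render as they arrive; display finishes with generation
      return { firstContentMs: FIRST_TOKEN_MS, fullDisplayMs: GENERATION_MS };
    case "full":
      // wait for the whole response, then show it at once
      return { firstContentMs: GENERATION_MS, fullDisplayMs: GENERATION_MS };
    case "fakeTyping":
      // wait for the whole response, THEN type it out character by character
      return {
        firstContentMs: GENERATION_MS,
        fullDisplayMs: GENERATION_MS + chars * TYPE_MS_PER_CHAR,
      };
  }
}
```

Under these assumptions, a 500-character response is fully visible after 4s for both streaming and full output, but only after 19s with fake typing, which is the gap being pointed at here.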

Take this opportunity to switch to this PR's branch and see for yourself how fast OpenDevin feels without the typing constraint.

I can't say how much work would be required to allow streaming from the backend, especially given that not all models support it. But I suppose it is worth discussing.

(We can also drop it entirely for the time being, unless we really want to look like we are streaming.)

By default, I will complete this draft with the current behavior of the chat and typing effect so it is better tested, simplified, and readable. If there are other suggestions, I'd be glad to pursue them.

Related issues

#1399 - Markdown is not applied to messages being typed. This is because the chat interface has two different components for rendering messages: one for messages that already exist, and one for messages still being typed. The <Markdown> component was only applied to the former (easy to miss). This PR attempts to resolve this by serving a single component, typing or not, to render messages.

@amanape amanape marked this pull request as draft May 1, 2024 13:53
@codecov-commenter commented May 1, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

❗ No coverage uploaded for pull request base (main@8961cea).


Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1488   +/-   ##
=======================================
  Coverage        ?   60.74%           
=======================================
  Files           ?       87           
  Lines           ?     3709           
  Branches        ?        0           
=======================================
  Hits            ?     2253           
  Misses          ?     1456           
  Partials        ?        0           


@rbren (Collaborator) left a comment

This LGTM! Seems like mostly tests so probably pretty safe 😄

@amanape amanape requested review from rbren and Sparkier May 3, 2024 12:58
@Sparkier (Collaborator) left a comment

Thank you for implementing all the tests. I personally think the previous typing animation looked better than messages just popping in, though.

Comment on lines 12 to 22
function ActionBanner() {
  return (
    <div
      data-testid="typing"
      className="flex items-center justify-center gap-2 bg-neutral-700 border-y border-neutral-500 py-1.5 px-4"
    >
      <div className="flex h-5 w-5 items-center justify-center" />
      <p className="text-sm text-gray-200 dark:text-gray-200">Working...</p>
    </div>
  );
}
Collaborator:

@openartist can you take a look at this? I like the typing animation better. Now we have the messages popping in with this banner that seems a bit out of place.

Collaborator:

chst.mp4

Collaborator:

There's a design in the figma for a three dot animation appearing in the bubble before text appears. @Sparkier let me know if you have time to review some options. I agree that even if we aren't streaming the AI response the typewriter effect gives a greater sense of responsiveness—perhaps we can just refine it a bit?

@@ -1,79 +0,0 @@
import { useEffect, useState } from "react";
Collaborator:

As mentioned, think this was a better indicator of work. Could even do something like ... while the agent is thinking.

Collaborator Author:

Yes, my alternative thought to the action banner was the .... The banner looked better in my head because I was also imagining that a similar banner could be used to indicate which tools the agent is using, or to show process indicators (such as install status).

I'm not fully convinced of my current implementation either, but it does make up for it with fast responses, which is why I was searching for an alternative to the typing effect in the first place.

@amanape (Collaborator Author) commented May 3, 2024

Until there is a better suggestion, I'll go ahead and test/implement the typing effect as before (albeit slower 😔)

@Sparkier (Collaborator) commented May 3, 2024

> Until there is a better suggestion, I'll go ahead and test/implement the typing effect as before (albeit slower 😔)

Also other people might have different opinions, please chime in! (@rbren @yimothysu @openartist)

@amanape (Collaborator Author) commented May 3, 2024

The new changes feature both the typing effect and the banner (until the relevant parties share their opinions). Whatever the decision, we'll go with it.

useTyping

Improvements

Minor changes

  • When the user resumes a session, all messages are retyped (for simplicity; FEEDBACK required)

@yimothysu (Collaborator) commented

> Also other people might have different opinions, please chime in! (@rbren @yimothysu @openartist)

I personally prefer to omit the typing effect because it gives the impression that we're streaming when we're not. IMO a loading animation (such as the one used by Gemini) is sufficient, but either way this looks great!

frontend/src/hooks/useTyping.ts (resolved)
@amanape amanape marked this pull request as ready for review May 4, 2024 15:45
6 participants