Commit

fix build errors
RicardoE105 committed May 12, 2024
1 parent 558e168 commit 68e5c53
Showing 4 changed files with 187 additions and 121 deletions.
221 changes: 132 additions & 89 deletions packages/cli/src/controllers/ai.controller.ts
@@ -15,36 +15,20 @@ import { RunnableWithMessageHistory } from '@langchain/core/runnables';
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';
import { JsonOutputFunctionsParser } from 'langchain/output_parsers';
-import { Unauthorized } from 'express-openapi-validator/dist/openapi.validator';
+import { AIMessage, HumanMessage } from '@langchain/core/messages';

const memorySessions = new Map<string, ChatMessageHistory>();

-const suggestionTodos = z.array(
-	z.object({
-		title: z.string(),
-		description: z.string(),
-	}),
-);
-
-const errorSuggestionsSchema = z.object({
-	suggestions: z.array(
-		z.object({
-			title: z.string().describe('The title of the suggestion'),
-			description: z.string().describe('Concise description of the suggestion'),
-			key: z.string(),
-			followUpQuestion: z.string().describe('The follow-up question to be asked to the user'),
-			followUpAction: z.string().describe('The follow-up action to be taken by the user'),
-			codeSnippet: z.string().optional().describe('The code snippet to be provided to the user'),
-			userUsingWrongRunMode: z
-				.boolean()
-				.optional()
-				.describe('Whether the user is using the wrong run mode'),
-		}),
-	),
-});
+const errorSuggestionSchema = z.object({
+	suggestion: z.object({
+		title: z.string().describe('The title of the suggestion'),
+		description: z.string().describe('Concise description of the suggestion'),
+		// followUpQuestion: z.string().describe('The follow-up question to be asked to the user'),
+		// followUpAction: z.string().describe('The follow-up action to be taken by the user'),
+		codeSnippet: z.string().optional().describe('The code snippet to be provided to the user'),
+	}),
+});

const stringifyAndTrim = (obj: object) => JSON.stringify(obj).trim();

@RestController('/ai')
export class AIController {
constructor(
@@ -81,108 +65,167 @@ export class AIController {
async debugChat(req: AIRequest.DebugChat, res: express.Response) {
const { sessionId, text, schemas, nodes, parameters, error } = req.body;

-		const systemMessage = SystemMessagePromptTemplate.fromTemplate(`
-			You're an assistant n8n expert assistant. Your role is to help users fix issues with coding in the n8n code node.
-			Provide two suggestions. Each suggestion should include: title, description and a code snippet.
-			If the suggestion is related to a wrong run mode, do not provide a code snippet.
-			Provide a follow up action responding the follow-up question affirmatively. For example: Yes, I would like to try this solution.
-			Make sure to end the suggestion with a follow-up question that should be answered by the user. For example: Would you like to try the solution in the code node?
-			The code node uses $now and $today to work with dates. Both methods are wrapper around the Luxon library
-			$now: A Luxon object containing the current timestamp. Equivalent to DateTime.now().
-			$today: A Luxon object containing the current timestamp, rounded down to the day.
-			The code node does not allow the use of import or require.
-			The code node does not allow to make http requests or accessing the file system.
-			There are two modes:
-			Run Once for All Items: this is the default. When your workflow runs, the code in the code node executes once, regardless of how many input items there are. In this mode you can access all input items using "items"
-			Run Once for Each Item: choose this if you want your code to run for every input item. In this mode you can access each input item using "item"
-			When mode is Run Once for each item, the code node cannot access the items to reference the input data.
-			When suggesting fixes to expressions which are referencing other nodes(or input data), carefully check the provided schema, if the node contains the referenced data.
-			## Workflow context
-			### Workflow nodes:
-			{nodes}
-			### All workflow nodes schemas:
-			{schemas}
-			### Run mode: {runMode}
-			### Language: {language}
-			### User Provided Code: {code}
-		`);
-
-		const systemMessageFormatted = await systemMessage.format({
-			nodes,
-			schemas: JSON.stringify(schemas),
-			runMode: parameters!.runMode,
-			language: parameters!.language,
-			code: parameters!.code,
-		});
-
-		const model = new ChatOpenAI({
-			temperature: 0,
-			openAIApiKey: process.env.N8N_AI_OPENAI_API_KEY,
-			modelName: 'gpt-4',
-			streaming: true,
-		});
-
-		const modelWithOutputParser = model.bind({
-			functions: [
-				{
-					name: 'output_formatter',
-					description: 'Should always be used to properly format output',
-					parameters: zodToJsonSchema(errorSuggestionsSchema),
-				},
-			],
-			function_call: { name: 'output_formatter' },
-		});
-
-		const outputParser = new JsonOutputFunctionsParser();
-
-		// messages.inputVariables;
-
-		const prompt = ChatPromptTemplate.fromMessages([
-			systemMessageFormatted,
-			['human', '{question} \n\n Error: {error}'],
-		]);
-
-		const chain = prompt.pipe(modelWithOutputParser).pipe(outputParser);
-
-		// const chainWithHistory = new RunnableWithMessageHistory({
-		// 	runnable: chain,
-		// 	getMessageHistory: async () => chatMessageHistory,
-		// 	inputMessagesKey: 'question',
-		// 	historyMessagesKey: 'history',
-		// });
-
-		const chainStream = await chain.stream({
-			question: text ?? 'Please suggest solutions for the error below',
-			error: JSON.stringify(error),
-		});
+		const model = new ChatOpenAI({
+			temperature: 0,
+			openAIApiKey: process.env.N8N_AI_OPENAI_API_KEY,
+			modelName: 'gpt-4',
+			streaming: true,
+		});
+
+		const modelWithOutputParser = model.bind({
+			functions: [
+				{
+					name: 'output_formatter',
+					description: 'Should always be used to properly format output',
+					parameters: zodToJsonSchema(errorSuggestionSchema),
+				},
+			],
+			function_call: { name: 'output_formatter' },
+		});
+
+		const outputParser = new JsonOutputFunctionsParser();
+
+		let chatMessageHistory = memorySessions.get(sessionId);
+
+		let isFollowUpQuestion = false;
+
+		if (!chatMessageHistory) {
+			chatMessageHistory = new ChatMessageHistory();
+			memorySessions.set(sessionId, chatMessageHistory);
+		} else {
+			isFollowUpQuestion = true;
+		}
+
+		let chainStream;
+
+		if (!isFollowUpQuestion) {
+			const systemMessage = SystemMessagePromptTemplate.fromTemplate(`
+				You're an assistant n8n expert assistant. Your role is to help users fix issues with coding in the n8n code node.
+				Provide ONE suggestion. The suggestion should include title, description and a code snippet. If the suggestion is related to a wrong run mode, do not provide a code snippet.
+				The code node uses $now and $today to work with dates. Both methods are wrapper around the Luxon library
+				$now: A Luxon object containing the current timestamp. Equivalent to DateTime.now().
+				$today: A Luxon object containing the current timestamp, rounded down to the day.
+				The code node does not allow the use of import or require.
+				The code node does not allow to make http requests or accessing the file system.
+				There are two modes:
+				Run Once for All Items: this is the default. When your workflow runs, the code in the code node executes once, regardless of how many input items there are. In this mode you can access all input items using "items"
+				Run Once for Each Item: choose this if you want your code to run for every input item. In this mode you can access each input item using "item"
+				When mode is Run Once for each item, the code node cannot access the items to reference the input data.
+				When suggesting fixes to expressions which are referencing other nodes(or input data), carefully check the provided schema, if the node contains the referenced data.
+				## Workflow context
+				### Workflow nodes:
+				{nodes}
+				### All workflow nodes schemas:
+				{schemas}
+				### Run mode: {runMode}
+				### Language: {language}
+			`);
+
+			const systemMessageFormatted = await systemMessage.format({
+				nodes,
+				schemas: JSON.stringify(schemas),
+				runMode: parameters!.mode,
+				language: parameters!.language,
+				code: parameters!.jsCode,
+			});
+
+			// messages.inputVariables;
+
+			// const messagesToSave = [
+			// 	systemMessageFormatted,
+			// 	['human', '{question} \n\n Error: {error}'],
+			// ];
+
+			const prompt = ChatPromptTemplate.fromMessages([
+				systemMessageFormatted,
+				['human', '{question} \n\n Error: {error}'],
+			]);
+
+			await chatMessageHistory.addMessage(systemMessageFormatted);
+			await chatMessageHistory.addMessage(
+				new HumanMessage(
+					`Please suggest solutions for the error below: \n\n Error: ${JSON.stringify(error)}`,
+				),
+			);
+
+			const chain = prompt.pipe(modelWithOutputParser).pipe(outputParser);
+
+			const chainWithHistory = new RunnableWithMessageHistory({
+				runnable: chain,
+				getMessageHistory: async () => chatMessageHistory,
+				inputMessagesKey: 'question',
+				historyMessagesKey: 'history',
+			});
+
+			// const humanMessage = await HumanMessagePromptTemplate.fromTemplate(
+			// 	`'{question} \n\n Error: {error}`,
+			// ).format({
+			// 	question: text ?? 'Please suggest solutions for the error below',
+			// 	error: JSON.stringify(error),
+			// });
+
+			// await chatMessageHistory.addMessage(humanMessage);
+
+			chainStream = await chainWithHistory.stream(
+				{
+					question: 'Please suggest solutions for the error below',
+					error: JSON.stringify(error),
+				},
+				{ configurable: { sessionId } },
+			);
+		} else {
+			// messages.inputVariables;
+
+			const prompt = ChatPromptTemplate.fromMessages([
+				new MessagesPlaceholder('history'),
+				['human', '{question}'],
+			]);
+
+			await chatMessageHistory.addMessage(new HumanMessage(`${text}`));
+
+			const chain = prompt.pipe(modelWithOutputParser).pipe(outputParser);
+
+			const chainWithHistory = new RunnableWithMessageHistory({
+				runnable: chain,
+				getMessageHistory: async () => chatMessageHistory,
+				inputMessagesKey: 'question',
+				historyMessagesKey: 'history',
+			});
+
+			chainStream = await chainWithHistory.stream(
+				{
+					question: error?.text ?? '',
+				},
+				{ configurable: { sessionId } },
+			);
+		}
+
+		let data = '';
		try {
			for await (const output of chainStream) {
-				// console.log('🚀 ~ AIController ~ forawait ~ output:', output);
-				res.write(JSON.stringify(output) + '\n');
+				data = JSON.stringify(output) + '\n';
+				res.write(data);
			}
+			await chatMessageHistory.addMessage(new AIMessage(JSON.stringify(data)));
+			// console.log('Final messages: ', chatMessageHistory.getMessages());
			res.end('__END__');
		} catch (err) {
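For context, the rewritten handler streams its reply as newline-delimited JSON chunks and then terminates the response with a literal __END__ marker (see the res.write and res.end calls above). The sketch below shows how a client might consume that stream; it is not part of this commit, and the /rest/ai/debug-chat URL and the payload argument are assumptions — the actual route prefix and path are not visible in this diff.

// Illustrative sketch only (not part of this commit): reading the streamed
// debug-chat response. Assumes the endpoint is reachable at /rest/ai/debug-chat;
// the real route prefix may differ.
async function readDebugChatStream(payload: unknown): Promise<string[]> {
	const response = await fetch('/rest/ai/debug-chat', {
		method: 'POST',
		headers: { 'Content-Type': 'application/json' },
		body: JSON.stringify(payload),
	});

	const reader = response.body!.getReader();
	const decoder = new TextDecoder();
	const lines: string[] = [];
	let buffer = '';

	for (;;) {
		const { done, value } = await reader.read();
		if (done) break;
		buffer += decoder.decode(value, { stream: true });

		// The controller writes JSON.stringify(output) + '\n' per chunk,
		// then closes the response with res.end('__END__').
		let newlineIndex = buffer.indexOf('\n');
		while (newlineIndex !== -1) {
			const line = buffer.slice(0, newlineIndex).trim();
			buffer = buffer.slice(newlineIndex + 1);
			if (line && line !== '__END__') lines.push(line);
			newlineIndex = buffer.indexOf('\n');
		}
	}

	// Whatever remains after the last newline is the terminator or a partial line.
	if (buffer.trim() && buffer.trim() !== '__END__') lines.push(buffer.trim());
	return lines;
}

Because JsonOutputFunctionsParser emits progressively more complete objects while streaming, a real consumer would typically keep only the last successfully parsed line rather than all of them.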
2 changes: 1 addition & 1 deletion packages/cli/src/requests.ts
@@ -173,7 +173,7 @@ export type Schema = { type: SchemaType; key?: string; value: string | Schema[];
export interface AIDebugChatPayload {
text?: string;
sessionId: string;
-	error?: NodeError;
+	error?: NodeError & { text?: string };
schemas?: Array<{ node_name: string; schema: Schema }>;
nodes?: string[];
parameters?: IDataObject;
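The requests.ts change above widens the optional error field so that a follow-up turn can carry its question in error.text, which the controller's follow-up branch reads via error?.text ?? ''. Below is a rough sketch of a first request and a follow-up request shaped after AIDebugChatPayload; it is not part of the commit, and every concrete value (node names, schema contents, language, code) is an invented example.

// Illustrative sketch only (not part of this commit): two payloads against the
// updated AIDebugChatPayload type. Field values are made up; the full NodeError
// shape comes from n8n-workflow and is simplified here.
const initialRequest = {
	sessionId: 'debug-session-1',
	error: {
		message: 'items is not defined [line 2]',
		description: 'ReferenceError thrown by the Code node',
	},
	nodes: ['Manual Trigger', 'Code'],
	schemas: [{ node_name: 'Code', schema: { type: 'object', value: [] } }],
	parameters: { mode: 'runOnceForAllItems', language: 'javaScript', jsCode: 'return itemss;' },
};

// A follow-up turn reuses the same sessionId; the controller reads the question
// from error.text (the field added in this diff) instead of rebuilding the system prompt.
const followUpRequest = {
	sessionId: 'debug-session-1',
	text: 'Yes, please show me the corrected code.',
	error: { text: 'Yes, please show me the corrected code.' },
};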