
the software has no reaction with no errors #94

Open
adambnn opened this issue Jun 24, 2023 · 0 comments

Comments


adambnn commented Jun 24, 2023

```js
import { LLM } from "llama-node";
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";
import path from "path";
import fs from "fs";

// Surface any rejected promise that nothing awaits.
process.on("unhandledRejection", (error) => {
  console.error("Unhandled promise rejection:", error);
});

const model = path.resolve(process.cwd(), "../llama.cpp/models/13B/ggml-model-q4_0.bin");

if (!fs.existsSync(model)) {
  console.error("Model file does not exist: ", model);
}

const llama = new LLM(LLamaCpp);
//console.log("model:", model)

const config = {
  modelPath: model,
  enableLogging: true,
  nCtx: 1024,
  seed: 0,
  f16Kv: false,
  logitsAll: false,
  vocabOnly: false,
  useMlock: false,
  embedding: true,
  useMmap: true,
  nGpuLayers: 0,
};
//console.log("config:", config)

const prompt = "Who is the president of the United States?";
const params = {
  nThreads: 4,
  nTokPredict: 2048,
  topK: 40,
  topP: 0.1,
  temp: 0.2,
  repeatPenalty: 1.1,
  prompt,
};
//console.log("params:", params)

try {
  console.log("Loading model...");
  await llama.load(config);
  console.log("Model loaded");
} catch (error) {
  console.error("Error loading model: ", error);
}

const response = await llama.createCompletion(params);
console.log(response);

// Note: this loads the same model a second time before requesting embeddings.
const run = async () => {
  try {
    await llama.load(config);
    console.log("load complete");
    await llama.getEmbedding(params).then(console.log);
  } catch (error) {
    console.error("Error loading model or generating embeddings: ", error);
  }
};
run();
```

I added a lot of debug logging and found that execution stops at line 44, `await llama.load(config);`. The sequence just halts there and the process terminates. No errors are caught.
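To tell a hang apart from a crash at that point, one option is to race the call against a timer. This is only a sketch: `withTimeout` is a hypothetical helper, not part of llama-node, and if the native binding kills the process outright no JavaScript handler can run, so it only helps when the call is merely stuck.

```javascript
// Hypothetical helper: race a promise against a timer so a call that
// silently hangs at least produces an explicit timeout error instead
// of no output at all.
function withTimeout(promise, ms, label) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms} ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch against the code above:
//   await withTimeout(llama.load(config), 60_000, "llama.load");
```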

MacBook Pro with M1 Max
macOS 13.4 (22F66)
Node.js v20.3.0
