C# bindings need to be updated #2256

Open
tameyerster opened this issue Apr 24, 2024 · 2 comments
Labels: bindings (gpt4all-binding issues), bug-unconfirmed, csharp-bindings (gpt4all-bindings C# specific issues)

Comments

@tameyerster

Bug Report
Exception on Prompt callback
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

Example Code

using Gpt4All;

var modelFactory = new Gpt4AllModelFactory();
if (args.Length < 2)
{
    Console.WriteLine("Usage: Gpt4All.Samples <model-path> <prompt>");
    return;
}

var modelPath = args[0];
var prompt = args[1];

using var model = modelFactory.LoadModel(modelPath);

var result = await model.GetStreamingPredictionAsync(
    prompt,
    PredictRequestOptions.Defaults);

// the callback to prompt (where the exception occurs) happens just after the call above
await foreach (var token in result.GetPredictionStreamingAsync())
{
    Console.Write(token);
}
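
For reference, the sample takes the model path and the prompt as its two command-line arguments; an illustrative invocation (the path below is a placeholder) would be:

Gpt4All.Samples.exe "C:\models\gpt4all-13b-snoozy-q4_0.gguf" "What is the capital of florida?"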

Steps to Recreate

  1. Downloaded the latest GPT4All from master (4/23/2024), commit id: baf1dfc
  2. Built the code using the PowerShell script build_win-msvc.ps1 (win-x64) from the developer PowerShell window in Visual Studio
  3. Opened the GPT4All solution in Visual Studio 2022 17.9.6 (64-bit) and built as Debug x64
  4. Once built, copied the binding DLLs to the debug bin directory
  5. Started debug mode, passing in the full path to the model and a simple prompt ("What is the capital of florida?")
  6. The model appears to load fine
  7. Exception thrown in LLModel.cs: NativeMethods.llmodel_prompt
    Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    Repeat 2 times:

      at Gpt4All.Bindings.NativeMethods.llmodel_prompt(IntPtr, System.String, LlmodelPromptCallback, LlmodelResponseCallback, LlmodelRecalculateCallback, Gpt4All.Bindings.llmodel_prompt_context ByRef)
    

      at Gpt4All.Bindings.LLModel.Prompt(System.String, Gpt4All.Bindings.LLModelPromptContext, System.Func`2<Gpt4All.Bindings.ModelPromptEventArgs,Boolean>, System.Func`2<Gpt4All.Bindings.ModelResponseEventArgs,Boolean>, System.Func`2<Gpt4All.Bindings.ModelRecalculatingEventArgs,Boolean>, System.Threading.CancellationToken)
      at Gpt4All.Gpt4All+<>c__DisplayClass10_0.<GetStreamingPredictionAsync>b__0()
      at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(System.Threading.Thread, System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
      at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef, System.Threading.Thread)
      at System.Threading.ThreadPoolWorkQueue.Dispatch()
      at System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()
    

Expected Behavior:
The model response written to the console

Environment:
Latest GPT4All from master; Model: gpt4all-13b-snoozy-q4_0.gguf; Windows 11; the current console sample; Core 8; .NET 8

I saw others have hit this exception, but during model loads. Let me know what additional info might help, or any tips on debugging it. The handle passed into Prompt seems valid.

tameyerster added the bindings (gpt4all-binding issues) and bug-unconfirmed labels on Apr 24, 2024

tameyerster commented Apr 24, 2024

Debugging into the C++ code, the exception is at

if (size_t(ctx->n_past) < wrapper->promptContext.tokens.size())

in

void llmodel_prompt(llmodel_model model, const char *prompt,
                    const char *prompt_template,
                    llmodel_prompt_callback prompt_callback,
                    llmodel_response_callback response_callback,
                    llmodel_recalculate_callback recalculate_callback,
                    llmodel_prompt_context *ctx,
                    bool special,
                    const char *fake_reply)

Viewing ctx in a watch window shows "cannot read memory" for the values.
It is passed in from C# as context.UnderlyingContext, which has valid values prior to being passed.
This looks like a mismatch between the C function and the binding code I have: prompt_template is not a parameter in the C# code, and neither is special.
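
For illustration only, a P/Invoke declaration that mirrors the native signature quoted above might look roughly like the sketch below. The delegate and struct names (LlmodelPromptCallback, LlmodelResponseCallback, LlmodelRecalculateCallback, llmodel_prompt_context) are the ones already visible in the stack trace; the library name and marshalling attributes are assumptions, not the project's actual fix.

using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // Hypothetical sketch: mirrors the new native llmodel_prompt signature quoted above.
    // The library name "libllmodel" and the marshalling attributes are assumptions.
    [DllImport("libllmodel", CallingConvention = CallingConvention.Cdecl)]
    public static extern void llmodel_prompt(
        IntPtr model,
        [MarshalAs(UnmanagedType.LPUTF8Str)] string prompt,
        [MarshalAs(UnmanagedType.LPUTF8Str)] string prompt_template,   // new in the C API
        LlmodelPromptCallback prompt_callback,
        LlmodelResponseCallback response_callback,
        LlmodelRecalculateCallback recalculate_callback,
        ref llmodel_prompt_context ctx,
        [MarshalAs(UnmanagedType.I1)] bool special,                    // new in the C API
        [MarshalAs(UnmanagedType.LPUTF8Str)] string fake_reply);       // new in the C API; null allowed
}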


tameyerster commented Apr 25, 2024

Looks like the C# bindings in main are not in sync with the changes in the C code around the prompt parameters, specifically the prompt_template and special flag.

Pulling down the latest release version
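
To make the mismatch concrete, a call through an updated declaration would also have to forward the new arguments from the managed wrapper. A rough call-site sketch follows; handle, text, the callback locals, and the "%1" / false / null values are placeholders, not what the bindings actually use:

// Hypothetical call site inside the managed Prompt wrapper.
// handle, text, and the callbacks stand in for whatever the wrapper already passes;
// "%1", special = false, and fake_reply = null are placeholder values only.
var ctx = context.UnderlyingContext;
NativeMethods.llmodel_prompt(
    handle,
    text,
    "%1",                  // prompt_template (new)
    promptCallback,
    responseCallback,
    recalculateCallback,
    ref ctx,
    false,                 // special (new)
    null);                 // fake_reply (new)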

cebtenzzre changed the title from "CSharp Binding AccessViolationException: Attempted to read or write protected memory. in LLModel Prompt Method" to "C# bindings need to be updated" on May 1, 2024
cebtenzzre reopened this on May 1, 2024
cebtenzzre added the csharp-bindings (gpt4all-bindings C# specific issues) label on May 1, 2024