```ts
try {
  runnerInstance = new LLM(RwkvCpp);
  const loadConfig: LoadConfig = {
    enableLogging: true,
    nThreads: 4,
    ...loadConfigOverwrite,
  };
  subscriber?.next({
    message: 'prepared to load instance',
    ...loggerCommonMeta,
    meta: { ...loggerCommonMeta.meta, loadConfigOverwrite },
  });
  await runnerInstance.load(loadConfig);
  subscriber?.next({ message: 'instance loaded', ...loggerCommonMeta });
  return runnerInstance;
} catch (error) {
  // error here is Error: Failed to initialize LLama context from file: /Users/linonetwo/Desktop/repo/TiddlyGit-Desktop/language-model-dev/llama.bin
  throw error;
}
```
while the log in the console is:

```
error loading model: unrecognized tensor type 13
llama_init_from_file: failed to load model
```
I think this detailed message should be attached to the thrown Error, so I can surface it to the user.
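As a workaround until the library attaches the native log to the Error, the catch block could wrap the failure in a richer error type. This is only a hedged sketch: `ModelLoadError` and the `lastNativeLogLine` hook are hypothetical names I'm introducing here, not part of the library's API; the idea is simply to carry the last console log line (e.g. `unrecognized tensor type 13`) along with the Error message.

```typescript
// Hypothetical sketch: attach the detailed native log line to the
// Error that is thrown, so callers can show it to the user.
class ModelLoadError extends Error {
  constructor(message: string, public readonly detail?: string) {
    // Fold the native detail into the message when it is available.
    super(detail ? `${message} (${detail})` : message);
    this.name = 'ModelLoadError';
  }
}

// `load` stands in for runnerInstance.load; `lastNativeLogLine` is an
// assumed hook that returns the most recent line the native side logged.
async function loadWithDetail(
  load: () => Promise<void>,
  lastNativeLogLine: () => string | undefined,
): Promise<void> {
  try {
    await load();
  } catch (error) {
    throw new ModelLoadError(
      error instanceof Error ? error.message : String(error),
      lastNativeLogLine(),
    );
  }
}
```

With something like this, the catch block in the snippet above would receive an Error whose message already contains the `error loading model: …` line instead of only the generic "failed to load model" text.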