# Activity · mlc-ai/web-llm

Pushes and merges to `refs/heads/main` (and `gh-pages` where noted), newest first.

**2024-05-23 · merged by CharlieFRuan · [WebGPU] Refine no WebGPU error message (#417)**
Refines the error message raised when no WebGPU is available and unifies it with the TVM one (https://github.com/apache/tvm/pull/17021).

**2024-05-23 · merged by tqchen · [ServiceWorker] Refine error message and fix lint warning (#418)**
Refines the error message in service worker initialization to make it more user-friendly, and resolves or suppresses ESLint warnings.

**2024-05-23 · merged by tqchen · [ServiceWorker] Add verbose mode for debugging (#410)**
Using a service worker is sometimes tricky, especially when debugging an issue. This PR adds `debug`-level logs that print the entire event trace for easier debugging. (Screenshots of the browser console at the default and verbose logging levels are attached to the PR.)

**2024-05-22 · merged by CharlieFRuan · [Version] Bump version to 0.2.38 (#416)**
Changes:
- https://github.com/mlc-ai/web-llm/pull/413
- https://github.com/mlc-ai/web-llm/pull/415, which fixes the `index.js.map` issue in Vite reported in https://github.com/mlc-ai/web-llm/issues/414

WASM version: v0_2_34, as no change is required. TVMjs compiled at https://github.com/apache/tvm/commit/a5862a5c696a3237f644f31bc312aae303213f3f, with no change.

**2024-05-22 · pushed by CharlieFRuan · Update package-lock.json**

**2024-05-22 · merged by CharlieFRuan · [Fix][Vite] Fix error for index.js.map (#415)**
Addresses https://github.com/mlc-ai/web-llm/issues/414. Since `index.js.map` is JSON, string literals inside it must use `\"` instead of `"`. Prior to this PR, the build populated `const performanceNode = "MLC_DUMMY_REQUIRE_VAR"`; this PR changes it to `const performanceNode = \"MLC_DUMMY_REQUIRE_VAR\"`.

**2024-05-22 · merged by CharlieFRuan · [Message] Update error messages to be more detailed (#413)**
Updates the error messages in `engine.ts` to make them clear, intuitive, and more detailed.

**2024-05-22 · merged by tqchen · [Style] Add GitHub action for linter and pre-commit hook formatter (#412)**
The repo previously had no formatter set up, and although ESLint rules existed, they were not enforced. This PR adds the following to ensure better code quality:
- `prettier` for automatic code formatting;
- a `husky` pre-commit hook that runs Prettier on every commit;
- GitHub Actions that check ESLint and Prettier on PRs and pushes.

All other changes are auto-formatting and can be safely ignored. Pre-commit hook output:

```
 git commit -m "[Style] Add GitHub action for linter and pre-commit hook formatter"

✔ Preparing lint-staged...
✔ Running tasks for staged files...
✔ Applying modifications from tasks...
✔ Cleaning up temporary files...
```

**2024-05-22 · merged by CharlieFRuan · [Version] Bump version to 0.2.37 (#409)**
This version bump is breaking, as APIs were renamed. The only two changes are:
- Renamed all `Engine` to `MLCEngine` (https://github.com/mlc-ai/web-llm/pull/408; see https://github.com/mlc-ai/web-llm/pull/408/files#diff-a2a171449d862fe29692ce031981047d7ab755ae7f84c707aef80701b3ea0c80 for the full list of changes). To accommodate this breaking change, a case-sensitive find-and-replace of `Engine` with `MLCEngine` should suffice.
- Renamed service worker classes (https://github.com/mlc-ai/web-llm/pull/403): `WebServiceWorker` → `ServiceWorker`, and `ServiceWorker` → `ExtensionServiceWorker`.

WASM version: v0_2_34, as no change is required. TVMjs compiled at https://github.com/apache/tvm/commit/a5862a5c696a3237f644f31bc312aae303213f3f, with no change.

**2024-05-22 · merged by CharlieFRuan · [Refactor] Rename Engine to MLCEngine (#408)**
Renames all `Engine` to `MLCEngine` to:
1. Match the naming scheme in https://github.com/mlc-ai/mlc-llm
2. Better convey the semantics of an `Engine`, especially when users do wildcard imports

To accommodate this breaking change, a case-sensitive find-and-replace of `Engine` with `MLCEngine` should suffice.

**2024-05-22 · merged by CharlieFRuan · [Rename] Rename service worker export classes (#403)**
Renames the following exported classes to better align with their well-known names:
- `WebServiceWorker` → `ServiceWorker` (https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API)
- `ServiceWorker` → `ExtensionServiceWorker` (https://developer.chrome.com/docs/extensions/develop/concepts/service-workers)

Note: this is a **breaking** change for existing extension service worker engine users, so merge with caution.

**2024-05-21 · pushed by CharlieFRuan · [README] Include CDN delivery example**

**2024-05-21 · pushed by CharlieFRuan to `gh-pages` · Build at Tue 21 May 2024 05:12:22 PM EDT**

**2024-05-21 · merged by CharlieFRuan · [Version] Bump version to 0.2.36 (#407)**
Main changes:
- New model `Hermes-2-Pro-Mistral-7B` in `prebuiltAppConfig` (https://github.com/mlc-ai/web-llm/pull/390)
- Various `index.js` and `index.js.map` post-processing steps to resolve frontend compatibility issues with `require()` and `perf_hooks` (https://github.com/mlc-ai/web-llm/pull/397, https://github.com/mlc-ai/web-llm/pull/406)
- Catch WebGPU OOM errors upon `reload()` and `CreateEngine()` (https://github.com/mlc-ai/web-llm/pull/402)
- Service Worker support, in addition to Extension Service Worker (https://github.com/mlc-ai/web-llm/pull/395, https://github.com/mlc-ai/web-llm/pull/400, https://github.com/mlc-ai/web-llm/pull/401)

WASM version: v0_2_34, as no change is required. TVMjs compiled at https://github.com/apache/tvm/commit/a5862a5c696a3237f644f31bc312aae303213f3f, with only one change in `tvm/web`: https://github.com/apache/tvm/pull/17005.

**2024-05-21 · merged by CharlieFRuan · [Device] Catch WebGPU OOM error (#402)**
Previously, when users called `createEngine()` or `reload()` with a model too large for the device, the device would likely keep generating, ignoring the OOM issue and correctness (see https://github.com/mlc-ai/web-llm/issues/356 and https://github.com/mlc-ai/web-llm/issues/209). This PR catches such errors with `device.lost.then()`, depending on tvmjs to call `device.destroy()` upon detecting an error in `createBuffer()` via https://github.com/apache/tvm/pull/17005. Only `createBuffer()` errors have been observed so far, so only that kind of error is processed for now. Besides, since most OOM errors occur in `reload()`, the error handling is made synchronous despite using `.then()`, by throwing any captured error at the end of `reload()`.

**2024-05-21 · merged by CharlieFRuan · [Fix] Remove perf hooks import from index.js (#406)**
Replaces `import require$$3 from 'perf_hooks';` with `const require$$3 = "MLC_DUMMY_REQUIRE_VAR"` in `index.js`. A dummy string is used because execution should never reach [this branch in tvmjs](https://github.com/apache/tvm/blob/a5862a5c696a3237f644f31bc312aae303213f3f/web/src/compact.ts#L29), which is for Node.js. This should address https://github.com/mlc-ai/web-llm/issues/258 and https://github.com/mlc-ai/web-llm/issues/127.

**2024-05-20 · merged by tqchen · chore: update types.ts (#404)**
overide → override.

**2024-05-20 · merged by tqchen · feat: added support Llama-3 on README file (#405)**
Adds the Llama-3 model to the repository's README.

**2024-05-19 · merged by tqchen · [Feature] Replace BroadcastChannel API with Client API in Service Worker (#401)**

*Overview.* Many different APIs are available for two-way communication between a service worker and web pages. Following the suggestion of web.dev (https://web.dev/articles/two-way-communication-guide), the simple BroadcastChannel API was used previously. However, if the browser has killed the service worker, messages sent via the BroadcastChannel API cannot revive it, which made the service worker's lifetime unstable while users were using the webapp. This PR replaces the BroadcastChannel API with the more primitive Client API, since messages sent via the Client API do revive a stopped service worker, ensuring the keep-alive mechanism works as intended.

*Primary changes.*
- `service_worker.ts`: replace `BroadcastChannel` with the Client API (`navigator.serviceWorker.controller.postMessage()` and `client.postMessage()`); add a `clientRegistry` to remember the mapping between incoming messages and client ids.
- Rename files (export names unchanged): `web_service_worker.ts` → `service_worker.ts`, and `service_worker.ts` → `extension_service_worker.ts`.

*Testing.* https://chat.webllm.ai/ and `examples/service-worker`. The chat webapp correctly keeps the service worker alive after this change.

**2024-05-17 · merged by Neet-Nestor · [Function] Add heartbeat to service worker (#400)**
Adds a heartbeat event to the web service worker so that the client can monitor its status and respond accordingly. Primary changes:
- Update the `{ type: "keepAlive" }` event to `{ kind: "keepAlive" }` to keep all event formats consistent
- Add a heartbeat event in the web service worker handler that reports its status back

**2024-05-15 · merged by Neet-Nestor · [Fix] Avoid unnecessary engine reload by correctly comparing ChatOption and AppConfig objects (#399)**
Previously, even when the client initialized the worker engine with exactly the same configuration, the worker would not recognize this and would unnecessarily re-initialize itself. The root cause was the use of `===` to compare objects, which compares reference equality rather than value equality. This PR fixes it by adding utility functions that deeply compare the **values** of these config objects. (The code is tedious, so it was generated using AI models.)

Tested on https://chat.neet.coffee with the following code added to `web_service_worker.ts`:

```typescript
console.log("modelId same? " + this.modelId === params.modelId);
console.log("chatOpts same? " + areChatOptionsEqual(this.chatOpts, params.chatOpts));
console.log("appConfig same? " + areAppConfigsEqual(this.appConfig, params.appConfig));
```

Before the fix:
```
modelId same? true
chatOpts same? false
appConfig same? false
```

After:
```
modelId same? true
chatOpts same? true
appConfig same? true
Already loaded the model. Skip loading
```

**2024-05-14 · merged by CharlieFRuan · docs: typo (#379)**

**2024-05-14 · merged by tqchen · Add Service Worker support and example (#395)**
Adds support for the [Service Worker](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API) API. Unlike the extension service worker already in the repo, which is used for Chrome extensions only, this is the generic Service Worker API usable by any web application. To avoid renaming existing classes, the original service worker file is kept as-is and the new classes are named `WebServiceWorkerXXX`. The PR also includes an example of running the WebLLM engine via the Service Worker API and updates the README.

**2024-05-14 · merged by tqchen · [Next][Fix] Remove require() in index.js for scriptDir (#397)**

*Overview.* Post-processes web-llm's `index.js` by replacing all occurrences of `new (require('u' + 'rl').URL)('file:' + __filename).href` with `"MLC_DUMMY_PATH"`:

```javascript
var _scriptDir =
  (typeof document === 'undefined' && typeof location === 'undefined' ?
    "MLC_DUMMY_PATH" :
    typeof document === 'undefined' ?
      location.href :
      (document.currentScript && document.currentScript.src || new URL('index.js', document.baseURI).href)
  );
```

The previous form raised the error shown in https://github.com/mlc-ai/web-llm/issues/383 and other issues. Other occurrences of `"MLC_DUMMY_PATH"` appear only in `if (ENVIRONMENT_IS_NODE)` branches at runtime, which are not considered or supported for now. `"MLC_DUMMY_PATH"` is used instead of `null` because the value is never expected to be used; if that assumption turns out to be wrong, `"MLC_DUMMY_PATH"` is easier to debug.

*Details.* When building projects that use web-llm with `next` (e.g. `examples/next-simple-chat`), **compile time** complains about the call to `require()`; runtime never hits it because `document` is not `undefined` when `_scriptDir` is evaluated. Other examples, like `examples/chrome-extension`, do not have this issue because they build with `parcel`, which fixes it via `@parcel/resolver-default`.

This fix does not affect correctness: inspecting `index.js`, `_scriptDir` is used to populate `scriptDirectory`, which is used in `locateFile()`, which is currently only used for `wasmBinaryFile` (and `isDataURI(wasmBinaryFile)` never evaluates to `false`):

```javascript
function locateFile(path) {
  if (Module["locateFile"]) { return Module["locateFile"](path, scriptDirectory) }
  return scriptDirectory + path
}

if (!isDataURI(wasmBinaryFile)) {
  wasmBinaryFile = locateFile(wasmBinaryFile);
}
```

Other `require()` calls in `index.js` are left alone for now, since by current understanding they cause no issues; they can be revisited if they do. One observation that is not yet explainable: with `"@mlc-ai/web-llm": "^0.2.35"` in `examples/next-simple-chat/package.json`, https://github.com/mlc-ai/web-llm/issues/383 is observed; with `"@mlc-ai/web-llm": "../.."`, no issue is observed and `require()` works at compile time.

**2024-05-14 · merged by CharlieFRuan · [JSON] Add Hermes-2-Pro function calling example with JSON schema (#390)**
Adds a function calling example with JSON schema, using the Hermes-2-Pro-Mistral-7B model. The system prompt is adapted from https://github.com/NousResearch/Hermes-Function-Calling/blob/main/prompt_assets/sys_prompt.yml#L29.

Query:
```
What is the current weather in celsius in Pittsburgh and Tokyo?
```

Output JSON schema:
```typescript
const T = Type.Object({
  tool_calls: Type.Array(
    Type.Object({
      arguments: Type.Any(),
      name: Type.String(),
    })
  )
});
```

Generated output:
```json
{"tool_calls": [{"arguments": {"location": "Pittsburgh, PA", "unit": "celsius"}, "name": "get_current_weather"}, {"arguments": {"location": "Tokyo, Japan", "unit": "celsius"}, "name": "get_current_weather"}]}
```

**2024-04-22 · pushed by CharlieFRuan to `gh-pages` · Build at Mon 22 Apr 2024 01:09:16 PM EDT**

**2024-04-22 · merged by CharlieFRuan · [Version] Bump version to 0.2.35 (#378)**
A very minor change from 0.2.34; only the prebuilt config is slightly updated (https://github.com/mlc-ai/web-llm/pull/377). WASM version: `v0_2_34`, as no change is required. TVMjs compiled at https://github.com/apache/tvm/commit/57316dae1497c36ed57732a7a610018a990f1927.

**2024-04-22 · merged by CharlieFRuan · [Prebuilt] Add 1k llama-2 q4f32_1 prebuilt (#377)**
Adds a `-1k` prebuilt for `llama-2-q4f32_1`, reorders the prebuilt models so that the `-1k` models are the default on the demo page, and adds a note on `-1k` to the demo site.

**2024-04-22 · merged by CharlieFRuan · [Version] Bump version to 0.2.34, update prebuilt WASM models (#375)**
Main change: support for JSON schema via https://github.com/mlc-ai/web-llm/pull/371. All WASMs are updated to `v0_2_34` to reflect the change in MLC's runtime. TVMjs compiled at https://github.com/apache/tvm/commit/57316dae1497c36ed57732a7a610018a990f1927 (main change: https://github.com/apache/tvm/pull/16910).

Note on WASM versioning: all WASMs were updated, as reflected by `modelVersion` in `src/config.ts` (and by the new folder `v0_2_34` in [binary-mlc-llm-libs](https://github.com/mlc-ai/binary-mlc-llm-libs/tree/main/web-llm-models)), and hence the implicitly updated `webllm.prebuiltAppConfig`. See https://github.com/mlc-ai/binary-mlc-llm-libs/pull/118 for the commits of MLC and TVM used when compiling these models. Pre-0.2.34 users can still use WebLLM without breakage, as the v0_2_30 models are kept and the links bind to the npm version. Users also do not need to clear their cache, since the `0.2.34` models have a different …

**2024-04-22 · merged by CharlieFRuan · [Grammar] Support json schema (#371)**
Supports JSON schema and adds the example `examples/json-schema` to demonstrate its usage. Requires an update to all WASMs due to updates in MLC's WASM runtime; an npm version bump will follow. Note that `schema` is only required to be a valid JSON schema string, so users can pick their own way of generating a JSON schema; here the third-party tool `typebox` is used.

```typescript
// 1. Define a schema
import { Type, type Static } from '@sinclair/typebox'
const T = Type.Object({
  size: Type.Integer(),
  is_accepted: Type.Boolean(),
  num: Type.Number(),
})
type T = Static<typeof T>;
const schema = JSON.stringify(T);

// 2. Specify the schema in response_format
const engine: webllm.EngineInterface = await webllm.CreateEngine("Llama-2-7b-chat-hf-q4f16_1");
const request: webllm.ChatCompletionRequest = {
  messages: [{
    "role": "user",
    "content": "Generate a json containing three fields: an integer field named size, a " +
      "boolean field named is_accepted, and a float field named num."
  }],
  response_format: { type: "json_object", schema: schema } as webllm.ResponseFormat
};

// 3. Get output
const reply0 = await engine.chatCompletion(request);
console.log(reply0);
console.log("Output:\n" + await engine.getMessage());
```
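The fix in #399 hinges on the difference between reference equality (`===`) and value equality for config objects. The sketch below illustrates that distinction with a minimal recursive comparison; `deepEqual` is a hypothetical stand-in written for this note, not WebLLM's actual `areChatOptionsEqual`/`areAppConfigsEqual` utilities, and the option fields shown are made up for illustration.

```typescript
// Minimal structural (value) equality check, sketching the idea behind
// the #399 fix: `===` on two objects compares references, so two
// identical configs built separately would wrongly register as
// different and trigger an unnecessary engine reload.
function deepEqual(a: unknown, b: unknown): boolean {
  if (a === b) return true; // same reference, or identical primitives
  if (
    typeof a !== "object" || typeof b !== "object" ||
    a === null || b === null
  ) {
    return false;
  }
  const objA = a as Record<string, unknown>;
  const objB = b as Record<string, unknown>;
  const keysA = Object.keys(objA);
  const keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;
  // Recurse into every field (arrays are compared index by index).
  return keysA.every((k) => deepEqual(objA[k], objB[k]));
}

const optsA = { temperature: 0.7, stop: ["</s>"] };
const optsB = { temperature: 0.7, stop: ["</s>"] };
console.log((optsA as object) === (optsB as object)); // false: different references
console.log(deepEqual(optsA, optsB));                 // true: same values
```

A library deep-equal (e.g. lodash's `isEqual`) would serve the same purpose; the point is only that config comparison must be by value.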
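The #415 fix boils down to JSON string escaping: a source map (`index.js.map`) is itself a JSON document, so any double quote injected into one of its string values must be written as `\"`. A small sketch of the failure mode, assuming a simplified source-map fragment (only the injected snippet is taken from the PR description; the surrounding JSON is illustrative):

```typescript
// index.js.map is JSON, so its "sourcesContent" entries are JSON strings.
// Injecting raw double quotes into such a string terminates it early and
// produces invalid JSON; the quotes must be escaped as \" in the file.
const injected = 'const performanceNode = "MLC_DUMMY_REQUIRE_VAR"';

// Unescaped: the inner quotes break the JSON string.
const broken = `{"sourcesContent": ["${injected}"]}`;
let parseFailed = false;
try {
  JSON.parse(broken);
} catch {
  parseFailed = true;
}
console.log(parseFailed); // true

// Escaped via JSON.stringify, which emits \" for the inner quotes.
const fixed = `{"sourcesContent": [${JSON.stringify(injected)}]}`;
console.log(JSON.parse(fixed).sourcesContent[0] === injected); // true
```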
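PRs #400 and #401 describe a keep-alive protocol between page and service worker: every event carries a `kind` field (unified from the earlier `type` field), and the worker answers `keepAlive` with a heartbeat so the client can monitor whether it is still responsive. The dispatch below is a hedged sketch of that message shape only; the `heartbeat` reply name and handler structure are assumptions, not WebLLM's actual implementation.

```typescript
// Sketch of the message-kind protocol from #400. In the real service
// worker the reply would be delivered with client.postMessage(); here
// the dispatch is modeled as a pure function so the shape is testable.
type WorkerMessage = { kind: string; payload?: unknown };

function handleMessage(msg: WorkerMessage): WorkerMessage | null {
  switch (msg.kind) {
    case "keepAlive":
      // Reply so the client's watchdog knows the worker is alive;
      // under the Client API, receiving this message also revives a
      // worker the browser had stopped.
      return { kind: "heartbeat" };
    default:
      return null; // other kinds are handled by the engine dispatch
  }
}

console.log(handleMessage({ kind: "keepAlive" })?.kind); // "heartbeat"
```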