Issues: AkihikoWatanabe/paper_notes
Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?, Zorik Gekhman+, N/A, arXiv'24
Labels: Pocket
#1308 opened May 20, 2024 by AkihikoWatanabe
ReFT: Representation Finetuning for Language Models, Zhengxuan Wu+, N/A, arXiv'24
Labels: Pocket
#1307 opened May 18, 2024 by AkihikoWatanabe
AirLLM, 2024.04
Labels: Efficiency/SpeedUp, LanguageModel, Library, NLP, Repository
#1297 opened Apr 28, 2024 by AkihikoWatanabe
Recommendation & Machine Learning Study Group (推薦・機械学習勉強会), Wantedly
Labels: Article, RecommenderSystems, Tutorial
#1295 opened Apr 26, 2024 by AkihikoWatanabe
The End of Finetuning — with Jeremy Howard of Fast.ai, 2023.11
Labels: Article, Finetuning, Pretraining
#1294 opened Apr 26, 2024 by AkihikoWatanabe
Compression Represents Intelligence Linearly, Yuzhen Huang+, N/A, arXiv'24
Labels: Pocket
#1288 opened Apr 17, 2024 by AkihikoWatanabe
TransformerFAM: Feedback attention is working memory, Dongseong Hwang+, N/A, arXiv'24
Labels: Pocket
#1287 opened Apr 16, 2024 by AkihikoWatanabe
Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws, Zeyuan Allen-Zhu+, N/A, arXiv'24
Labels: Pocket
#1286 opened Apr 15, 2024 by AkihikoWatanabe
Knowledge Conflicts for LLMs: A Survey, Rongwu Xu+, N/A, arXiv'24
Labels: LanguageModel, NLP, Pocket, Survey
#1284 opened Apr 14, 2024 by AkihikoWatanabe