Exciting energy for the @LTIatCMU large language model event! Come on out this weekend if you're around Pittsburgh… twitter.com/i/web/status/1…
Here are the slides for my kick-off talk, a high-level overview of the exciting promise and current issues with lar… twitter.com/i/web/status/1…
Worried that organising AI festivals is peak AI hype? Want to work for an AI company with an actual clear business… twitter.com/i/web/status/1…
Yesterday, I covered the 3 classic ways to finetune LLMs. Let's now delve into parameter-efficient finetuning techn… twitter.com/i/web/status/1…
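The thread above is truncated, but one widely used parameter-efficient finetuning technique is LoRA (low-rank adaptation): freeze the pretrained weights and train only a small low-rank update. A minimal sketch of the idea (the `LoRALinear` class, dimensions, and hyperparameters here are illustrative, not from the thread):

```python
import torch

class LoRALinear(torch.nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W·x + scale·(B @ A)·x."""
    def __init__(self, in_features, out_features, rank=4, alpha=8):
        super().__init__()
        self.base = torch.nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the "pretrained" weight
        self.base.bias.requires_grad_(False)
        # Low-rank factors: A is small random, B starts at zero so the
        # adapted layer initially behaves exactly like the base layer.
        self.A = torch.nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(16, 8, rank=2)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
```

With rank 2, only 48 of the layer's 184 parameters are trainable; at LLM scale this is where the dramatic parameter savings come from.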
Checked AI Twitter after one week of vacation pic.twitter.com/WCXSn0ZvxN
I need an LLM everywhere meme.
LLaMA-Adapter: finetuning large language models (LLMs) like LLaMA and matching Alpaca's modeling performance with g… twitter.com/i/web/status/1…
My talk at Google Cloud Technical Series is out now. Discussing how AI and its embeddings are replacing the existing st… twitter.com/i/web/status/1…
I had no idea Amazon has an endless supply of AI-generated photo books of beautiful women.
The Gotoh Museum's first exhibition of the new fiscal year: from the museum collection, "Spring Masterpieces: Appreciating the Kokin Wakashū" (Sat., April 1, 2023 – Sun., May 7, 2023). As usual, from Sat., April 29 through Sun., May 7, the National Treasure "Tale of Genji Illustrated Scrolls: Suzumushi I, Suzumushi II, Yūgiri, Minori" will be sp… twitter.com/i/web/status/1…
The first session will be on Alpaca.
crfm.stanford.edu/2023/03/13/alp…
We're starting a weekly reading group at home. We'll be reading papers.
🔥Excited to release LLaMA-Adapter! With only 1.2M learnable parameters and 52K instruction data, LLaMA-Adapter turn… twitter.com/i/web/status/1…
Correction, it will be tomorrow (Thursday). twitter.com/lishali88/stat…
1/The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea.
I'm seeing many new app… twitter.com/i/web/status/1…
@MeganRisdal Just finished mine. 😂
New hardware-friendly blog post:
Finetuning Large Language Models On A Single GPU Using Gradient Accumulation
🔗… twitter.com/i/web/status/1…
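The blog post linked above is truncated here, but the core trick it names is simple: run several small micro-batches, let gradients accumulate in `.grad`, and step the optimizer once per accumulated batch, simulating a larger batch size on a single GPU. A minimal PyTorch sketch (the tiny model and random data are stand-ins, not from the post):

```python
import torch

# Toy model and optimizer; in practice this would be the LLM being finetuned.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

accumulation_steps = 4  # effective batch size = micro-batch size * 4

optimizer.zero_grad()
for step in range(16):
    x = torch.randn(8, 10)            # micro-batch small enough to fit in GPU memory
    y = torch.randint(0, 2, (8,))
    loss = loss_fn(model(x), y) / accumulation_steps  # scale so the sum averages correctly
    loss.backward()                    # gradients accumulate in p.grad across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()               # one weight update per accumulated "large" batch
        optimizer.zero_grad()
```

Dividing the loss by `accumulation_steps` is the detail people most often miss: without it, the accumulated gradient is the sum over micro-batches rather than the mean, which effectively multiplies the learning rate.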