- 27 Posts
- 4 Comments
Joined 2 years ago
Cake day: June 10th, 2023
Blaed@lemmy.world OP to Technology@lemmy.ml • Introducing OpenLLaMA: An Open-Source Reproduction of Meta's LLaMA • English • 1 · 2 years ago

Thanks for sharing this!

Blaed@lemmy.world OP to Technology@lemmy.ml • Introducing OpenLLaMA: An Open-Source Reproduction of Meta's LLaMA • English • 1 · 2 years ago

Good bot, I will do that next time.
Blaed@lemmy.world OP to Technology@lemmy.ml • Introducing OpenLLaMA: An Open-Source Reproduction of Meta's LLaMA • English • 3 · 2 years ago

Come hang out with us at [email protected]
I run this show solo at the moment, but I do my best to keep everyone informed. I have much more content on the horizon. We'd love to have you if it sounds like what you're looking for.
FOSAI Posts:
I used to feel the same way until I found some very interesting performance results from 3B and 7B parameter models.
Granted, it wasn’t anything I’d deploy to production, but using the smaller models to prototype quick ideas is great before having to rent a GPU and spend time working with the bigger models.
Give a few models a try! You might be pleasantly surprised. There’s plenty to choose from too. You will get wildly different results depending on your use case and prompting approach.
Let us know if you end up finding one you like! I think it is only a matter of time before we’re running 40B+ parameters at home (casually).
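For anyone curious what prototyping with one of the small models looks like, here’s a minimal sketch using the Hugging Face transformers library. The openlm-research/open_llama_3b checkpoint is just my assumed example here; swap in whichever small model you want to try.

```python
# Minimal sketch: prototyping with a small (3B) open model locally.
# Assumes `transformers`, `torch`, `sentencepiece`, and `accelerate` are installed.
# The model id below is an assumed example (OpenLLaMA 3B); any small causal LM
# loads the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openlm-research/open_llama_3b"

# use_fast=False avoids the auto-converted fast tokenizer, which OpenLLaMA's
# authors note can tokenize incorrectly for these checkpoints.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits on many consumer GPUs
    device_map="auto",          # falls back to CPU if no GPU is available
)

prompt = "Q: What is the Fediverse?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Results will vary a lot with the prompt and sampling settings, which is exactly why these small models are handy for cheap experimentation before committing to a bigger one.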