đŹ Issue #14: Depression
Neither robot nor researcher is immune to the blues - so let's fix it.
Friday achieved. Let's go.
Last week we covered how AI researchers are feeling massive pressure in the wake of OpenAI. This isn't a new phenomenon for careers in hot fields, but right now the stress laser is zeroed in on AI academics, many of whom feel they must urgently choose between science and profit.
In a recently published academic paper titled "Choose Your Weapon: Survival Strategies for Depressed AI Academics," I learned that it is actually possible to make even a university research paper both 1. acutely useful and 2. hilarious, so I summed up all ~6,000 words for you here:
(And no, I didn't sum this up with ChatGPT. I tried but it didn't like the input and I run away from hard things.)
The problem, as stated: You used to be able to do AI research on a couple of GPUs in a lab, but massive computing power, massive datasets, and massive megabucks objectively make for better results. So how do we compete on a college budget when OpenAI has hundreds of millions of dollars?
Some solutions: (Now, pay attention, this isn't just AI talk. This is "how to survive in any super competitive industry" stuff.)
Give up! - Ha. True to the title re: depression. You can always throw in the towel and "give up on doing things that are really impactful." But, psych, this one is actually a challenge because you're already in the game, so you might as well keep going. You signed up to do hard things and you're more than capable.
Try scaling anyway - AKA "Let's go tilting at windmills!" The paper pegs the representative sum available to university researchers at $50k, which can now be spent efficiently on cloud computing vs., say, bolting together a bunch of gaming PCs to run your models. Can you do a model as large as ChatGPT? Hell naw. Can you do something? Absolutely.
Scale down - Focus on "toy" problems, something simple yet representative. Take your big meaty huge project and find the smallest important thing you can pull out and test individually. The media loves huge projects, but we're in the progress business, aren't we?
Reuse and remaster - Or, as one plucky newsletter author would say, steal like an artist. There are tons of open models out there just for you! Adopt large parts of what works and focus on your thing. Don't succumb to "Not Invented Here" syndrome. (There's a tiny code sketch of this after the list.)
Analysis instead of synthesis - Take an existing model (or whatever the large unit of work in your field is) and analyze it vs. trying to build a whole new one on your own. We have lots of new models that produce lots of incredible outputs, but we don't really know how they work. It's a black box. So be a pal and figure it out so the rest of us know.
"RL! No Data!" - Translation: Large Language Models (LLMs) need massive quantities of data and massive truckloads of money to train, whereas Reinforcement Learning (RL) needs only the latter. (Nothing is free, little one.) Not needing massive data is a wonderful thing if you're working to make some progress.
"Small Models! No Compute!" - "Think of the smallest possible models that are capable of solving a problem or completing a task." This is a great takeaway for any work ever. You probably have a vision in your head for this big awesome thing that's going to snap your industry over your knee because it's just so cool and huge, but lots of things still succeed on a much smaller scale. Things like "Edge AI" work with data in the moment, so no giant dataset is required. Don't write this one off just because it's nerdy AI stuff - you probably have ways to simplify your own thing too.
Work on specialized application areas or domains - aka niche down and win. Find something too small for mega-industry to care about.
Solve problems few care about (for now!) - Pull your head up and look around. What fields or sub-fields aren't sexy yet but have potential in your mind? What do normal non-researcher people care about in the real world? That's where the next horizon lies.
Try things that shouldnât work - Big company must do thing that always work. So try thing that probably definitely not work and maybe win by surprise.
Do things that have bad optics - Basically, the bigger a company is, the more it cares about how something looks to the media and general public. So just limit your constraints to "the law and your own personality" and do some wild stuff. The field is far, far more open for you to create in than it is for a stuffy corporation obsessed with PR.
Start it up; spin it out! - Classic tech academia-to-industry path: crack the nut and spin it out of your lab. Commercializing your research gets you out into the field and possibly towards the resources you need to do that Big Thing you feel you must do. Will this screw with your research career? Probably. But if you've got the mettle, this is a major option.
Collaborate or jump ship! - Partner with a big university or organization to make your dreams happen. Free lunch is a powerful drug and mid-level executives are not immune. Charm and reason your way into those compute cycles.
How can large players in the industry help? - See above.
How can universities help? - See above again. Lunch. Charm.
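To make a couple of the strategies above concrete (reuse and remaster, plus small models), here's a minimal Python sketch of what "start from someone else's open model" can look like. The library (Hugging Face transformers) and the model (distilgpt2, a small openly available one) are my example picks, not the paper's:

```python
# A minimal sketch of "reuse and remaster": start from an open pretrained
# model instead of training your own from scratch. Assumes the Hugging Face
# `transformers` library is installed and that a small model like
# "distilgpt2" is good enough for your toy problem.
from transformers import pipeline

# Download a small pretrained language model -- laptop-sized, no cluster needed.
generator = pipeline("text-generation", model="distilgpt2")

# Spend your effort on *your* slice of the problem (evaluation, fine-tuning
# on a niche dataset, analysis), not on rebuilding the base model.
out = generator("Survival strategies for depressed AI academics:", max_new_tokens=40)
print(out[0]["generated_text"])
```

That's the whole on-ramp; the publishable part is whatever you layer on top of it.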
To summarize this whole paper, it's basically this: Relax a little. Keep it simple. Keep it moving. You're in a wonderfully fruitful position if you understand anything about AI right now. The resources are out there in many forms; you may just need to ask nicely.
But really, you should just go read the whole paper; it's great: https://arxiv.org/abs/2304.06035
Good luck out there, AI researchers (and the rest of us mere mortals).
ON THE INTERNETS
TWEET OF THE WEEK
Benadryl be like, you got allergies? No prob, here's a coma.
– Missy Baker (@TheMissyBaker)
4:22 AM · Apr 4, 2023
See ya next week
– đŹ The EiT Crew at Status Hero