
Cognitive load is what matters

Excellent living document (the underlying repo has 625 commits since being created in May 2023) maintained by Artem Zakirullin about minimizing the cognitive load needed to understand and maintain software.

This all rings very true to me. I judge the quality of a piece of code by how easy it is to change, and anything that causes me to take on more cognitive load - unraveling a class hierarchy, reading through dozens of tiny methods - reduces the quality of the code by that metric.
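To make that concrete, here's a hypothetical sketch of my own (not an example from the document - the order-validation names are invented): the same three checks written first as a scatter of tiny methods behind a class, then as one plain function.

    # Hypothetical sketch (not from the linked repo): the same order check,
    # first split across tiny methods, then written inline.

    class OrderValidator:
        """Fragmented version: following the flow means jumping between methods."""

        def validate(self, order):
            self._check_items(order)
            self._check_total(order)
            self._check_address(order)

        def _check_items(self, order):
            if not order["items"]:
                raise ValueError("order has no items")

        def _check_total(self, order):
            if order["total"] <= 0:
                raise ValueError("total must be positive")

        def _check_address(self, order):
            if not order.get("address"):
                raise ValueError("missing shipping address")


    def validate_order(order):
        """Inline version: the whole rule set fits on one screen."""
        if not order["items"]:
            raise ValueError("order has no items")
        if order["total"] <= 0:
            raise ValueError("total must be positive")
        if not order.get("address"):
            raise ValueError("missing shipping address")


    # Both versions accept the same orders and reject the same orders.
    order = {"items": ["book"], "total": 12.5, "address": "123 Main St"}
    OrderValidator().validate(order)
    validate_order(order)

Both versions enforce the same rules, but the second one lets you hold the whole thing in your head without chasing definitions around the file.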

Lots of accumulated snippets of wisdom in this one.

Mantras like "methods should be shorter than 15 lines of code" or "classes should be small" turned out to be somewhat wrong.

Via @karpathy

Tags: programming, software-engineering


Health Insurance Trolley

1 public comment
jlvanderzwan
22 days ago
"The needs of the many outweigh the needs of the few" may have its problems and counter-arguments, but I don't think "imagine that "the few" are literally Hitler" is one of them

Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable


Investment giant Goldman Sachs published a research paper about the economic viability of generative AI which notes that there is “little to show for” the huge amount of spending on generative AI infrastructure and questions “whether this large spend will ever pay off in terms of AI benefits and returns.” 

The paper, called “Gen AI: too much spend, too little benefit?” is based on a series of interviews with Goldman Sachs economists and researchers, MIT professor Daron Acemoglu, and infrastructure experts. The paper ultimately questions whether generative AI will ever become the transformative technology that Silicon Valley and large portions of the stock market are currently betting on, but says investors may continue to get rich anyway. “Despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst,” the paper notes. 

Goldman Sachs researchers also say that AI optimism is driving large growth in stocks like Nvidia and other S&P 500 companies (the largest companies in the stock market), but say that the stock price gains we’ve seen are based on the assumption that generative AI is going to lead to higher productivity (which necessarily means automation, layoffs, lower labor costs, and higher efficiency). These stock gains are already baked in, Goldman Sachs argues in the paper: “Although the productivity pick-up that AI promises could benefit equities via higher profit growth, we find that stocks often anticipate higher productivity growth before it materializes, raising the risk of overpaying. And using our new long-term return forecasting framework, we find that a very favorable AI scenario may be required for the S&P 500 to deliver above-average returns in the coming decade.” (Ed Zitron also has a thorough writeup of the Goldman Sachs report over at Where's Your Ed At.)

It adds that “outside of the most bullish AI scenario that includes a material improvement to the structural growth/inflation mix and peak US corporate profitability, we forecast that S&P 500 returns would be below their post-1950 average. AI’s impact on corporate profitability will matter critically.”

"Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks"

What this means in plain English is that one of the largest financial institutions in the world is seeing what people who are paying attention are seeing with their eyes: Companies are acting like generative AI is going to change the world and are acting as such, while the reality is that this is a technology that is currently deeply unreliable and may not change much of anything at all. Meanwhile, their stock prices are skyrocketing based on all of this hype and investment, which may not ultimately change much of anything at all.

Acemoglu, the MIT professor, told Goldman that the industry is banking on the idea that largely scaling the amount of AI training data—which may not actually be possible given the massive amount of training data already ingested—is going to solve some of generative AI’s growing pains and problems. But there is no evidence that this will actually be the case: “What does a doubling of data really mean, and what can it achieve? Including twice as much data from Reddit into the next version of GPT may improve its ability to predict the next word when engaging in an informal conversation, but it won't necessarily improve a customer service representative’s ability to help a customer troubleshoot problems with their video service,” he said. “The quality of the data also matters, and it’s not clear where more high-quality data will come from and whether it will be easily and cheaply available to AI models.” He also posits that large language models themselves “may have limitations” and that the current architecture of today’s AI products may not get measurably better. 

Jim Covello, who is Goldman Sachs’ head of global equity research, meanwhile, said that he is skeptical about both the cost of generative AI and its “ultimate transformative potential.” 

“AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” he said. “People generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks.” He added that Goldman Sachs has tested AI to “update historical data in our company models more quickly than doing so manually, but at six times the cost.” 

Covello then likens the “AI arms race” to “virtual reality, the metaverse, and blockchain,” which are “examples of technologies that saw substantial spend but have few—if any—real world applications today.” 

The Goldman Sachs report comes on the heels of a piece by David Cahn, partner at the venture capital firm Sequoia Capital, which is one of the largest investors in generative AI startups, titled “AI’s $600 Billion Question,” which attempts to analyze how much revenue the AI industry as a whole needs to make in order to simply pay for the processing power and infrastructure costs being spent on AI right now. 

To break even on what they’re spending on AI compute infrastructure, companies need to vastly scale their revenue, which Sequoia argues is not currently happening anywhere near the scale these companies need to break even. OpenAI’s annualized revenue has doubled from $1.6 billion in late 2023 to $3.4 billion, but Sequoia’s Cahn asks in his piece: “Outside of ChatGPT, how many AI products are consumers really using today? Consider how much value you get from Netflix for $15.49/month or Spotify for $11.99. Long term, AI companies will need to deliver significant value for consumers to continue opening their wallets.”

This is all to say that journalists, artists, workers, and even people who use generative AI are not the only ones who are skeptical about its transformative potential. The very financial institutions that have funded and invested in the AI frenzy, and that are responsible for billions of dollars in investment decisions, are starting to wonder what this is all for.



1 public comment
tante
192 days ago
"What this means in plain English is that one of the largest financial institutions in the world is seeing what people who are paying attention are seeing with their eyes: Companies are acting like generative AI is going to change the world and are acting as such, while the reality is that this is a technology that is currently deeply unreliable and may not change much of anything at all."

Quoting Jim Covello, Goldman Sachs


My main concern is that the substantial cost to develop and run AI technology means that AI applications must solve extremely complex and important problems for enterprises to earn an appropriate return on investment.

We estimate that the AI infrastructure buildout will cost over $1tn in the next several years alone, which includes spending on data centers, utilities, and applications. So, the crucial question is: What $1tn problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I've witnessed in my thirty years of closely following the tech industry.

Jim Covello, Goldman Sachs

Tags: ai, generative-ai


You can't side-quest a product

Here's a trap that talented engineers fall into all the time. It creates frustration, burnout, and the genre of tweets that read like "Why don't people care about the amazing work I'm doing".

Trust


In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

1 public comment
LeMadChef
224 days ago
The office off the bathroom is creepypasta.