<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:media="http://search.yahoo.com/mrss/"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/">
  <channel>
    <title>Methodology | Irene Burresi</title>
    <link>https://ireneburresi.dev/</link>
    <description>Work methodologies for AI professionals: research workflow, knowledge management, productivity, and operational tools.</description>
    <language>en-US</language>
    <copyright>© 2026 Irene Burresi · CC-BY-4.0</copyright>
    <managingEditor>Irene Burresi</managingEditor>
    <webMaster>Irene Burresi</webMaster>
    <generator>Astro Feed Engine</generator>
    <docs>https://www.rssboard.org/rss-specification</docs>
    <ttl>360</ttl>
    <lastBuildDate>Sun, 15 Mar 2026 16:26:21 GMT</lastBuildDate>
    <pubDate>Sat, 20 Dec 2025 00:00:00 GMT</pubDate>
    <atom:link href="https://ireneburresi.dev/en/methodology/rss.xml" rel="self" type="application/rss+xml"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <image>
      <url>https://ireneburresi.dev/images/og-default.svg</url>
      <title>Methodology | Irene Burresi</title>
      <link>https://ireneburresi.dev/</link>
    </image>
    <sy:updatePeriod>hourly</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <item>
      <title>You&apos;re Measuring AI Wrong</title>
      <link>https://ireneburresi.dev/en/blog/business/misurare-ia/</link>
      <guid isPermaLink="true">https://ireneburresi.dev/en/blog/business/misurare-ia/</guid>
      <pubDate>Sat, 20 Dec 2025 00:00:00 GMT</pubDate>
      <dc:creator>Irene Burresi</dc:creator>
      <dc:language>en</dc:language>
      <description><![CDATA[<p>60% of managers admit they need better AI KPIs, yet most still track hours saved instead of impact. Segment by role, separate augmentative from substitutive use, and monitor continuously.</p>]]></description>
      <content:encoded><![CDATA[<h2>The measurement paradox</h2>
<p><em>60% of managers admit they need better KPIs for AI. Only 34% are doing anything about it. Meanwhile, the data that actually matters already exists, but nobody’s looking at it.</em></p>
<p><strong>TL;DR:</strong> Companies measure activity (hours saved, tasks automated) instead of impact. A Stanford paper analyzing 25 million workers shows what to do instead: segment by role and seniority, distinguish substitutive from augmentative use, use control groups, monitor in real time. Those who adopt these principles will have an information advantage over those still tracking vanity metrics.</p>
<hr />
<p>The 2025 AI adoption reports tell a strange story. On one hand, companies claim to measure everything: completed deployments, hours saved, tickets handled, costs reduced. On the other, <a href="https://www.spglobal.com/market-intelligence/en/news-insights/research/2025/10/generative-ai-shows-rapid-growth-but-yields-mixed-results">42% are abandoning most of their AI projects</a>, more than double the previous year. According to <a href="https://projectnanda.org/">MIT NANDA</a>, <strong>95% of pilot projects</strong> generate no measurable impact on the bottom line.</p>
<p>If we measure so much, why do we fail so often?</p>
<p>The problem is we’re measuring the wrong things. Typical enterprise AI metrics (time saved per task, volume of automated interactions, cost per query) capture activity, not impact. They tell you whether the system works technically, not whether it’s creating or destroying value.</p>
<p>A paper published in August 2025 by Stanford’s Digital Economy Lab takes a different approach to measuring AI’s effects. And the implications for those managing technology investments are concrete.</p>
<hr />
<h2>The vanity metrics problem</h2>
<p>Most corporate AI dashboards track variants of the same metrics: how many requests processed, how much time saved per interaction, what percentage of tasks automated. These are numbers that grow easily and look good in slides. Their flaw is fundamental: they say nothing about real business impact.</p>
<p>A chatbot handling 10,000 tickets per month looks like a success. But if those tickets still require human escalation 40% of the time, if customer satisfaction has dropped, if your most profitable customers are migrating to competitors, the number of tickets handled captures none of this.</p>
<p>The S&amp;P Global 2025 report documents exactly this pattern: companies that accumulated “deployments” and “completed experiments” only to discover, months later, that ROI wasn’t materializing. Costs were real and immediate; benefits were vague and perpetually deferred to next quarter.</p>
<p>According to an MIT Sloan analysis, <strong>60% of managers recognize they need better KPIs</strong> for AI. But only 34% are actually using AI to create new performance indicators. The majority continues using the same metrics they used for traditional IT projects, metrics designed for deterministic software, not for probabilistic systems interacting with complex human processes.</p>
<hr />
<h2>What serious measurement looks like</h2>
<p><a href="https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/">“Canaries in the Coal Mine”</a>, the paper by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen published by Stanford’s Digital Economy Lab, isn’t about how companies should measure AI. It’s about how AI is changing the labor market. But the method it uses is exactly what’s missing from most enterprise evaluations.</p>
<p>The authors obtained access to payroll data from ADP, the largest payroll processor in the United States, with monthly records of over 25 million workers. Not surveys, not self-reports, not estimates: granular administrative data on who gets hired, who leaves, how much they earn, in which role, at which company.</p>
<p>They then cross-referenced this data with two AI exposure metrics: one based on theoretical task analysis (which jobs are technically automatable) and one based on actual usage data (how people actually use Claude, Anthropic’s model, in daily work).</p>
<p>The result is an X-ray of AI’s impact with unprecedented granularity. Not the generic “AI is changing work” but precise numbers: employment for software developers aged 22-25 dropped <strong>20% from the late 2022 peak</strong>, while for those over 35 in the same roles it grew 8%. In professions where AI use is predominantly substitutive, young workers lose employment; where it’s predominantly augmentative, there’s no decline.</p>
<p>This type of measurement should inform corporate AI decisions. Not because companies need to replicate this exact study, but because it illustrates three principles that most enterprise metrics ignore entirely.</p>
<hr />
<h2>Measure differential effects, not averages</h2>
<p>Aggregate data hides more than it reveals. If you only measure “hours saved by AI,” you don’t see who’s saving those hours and who’s losing their job. If you only measure “tickets automated,” you don’t see which customers are receiving worse service.</p>
<p>The Stanford paper shows that AI’s impact differs radically by age group. Workers aged 22-25 in exposed professions saw a 13% employment decline relative to colleagues in less exposed roles. Workers over 30 in the same professions saw growth. The average effect is nearly zero, but the real effect is massive redistribution.</p>
<p>For a CFO, aggregate productivity metrics can mask hidden costs. If AI is increasing output from the senior team while making it impossible to hire and train juniors, the short-term gain could transform into a talent pipeline problem in the medium term. The paper calls it the <em>“apprenticeship paradox”</em>: companies stop hiring entry-level workers because AI handles those tasks better, but without entry-level today there won’t be seniors tomorrow.</p>
<p>The operational consequence is that every AI dashboard should segment impact by role, seniority, team, and customer type. A single “productivity” number is almost always misleading.</p>
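<p>The segmentation logic is simple enough to sketch. Here is a minimal, hypothetical example (invented numbers, standard library only) of how a single aggregate productivity number can read as zero while hiding opposite effects by seniority:</p>

```python
from statistics import mean

# Hypothetical per-person productivity deltas (% change after an AI rollout),
# tagged by seniority. All names and numbers are invented for illustration.
deltas = [
    {"seniority": "junior", "delta": -12.0},
    {"seniority": "junior", "delta": -8.0},
    {"seniority": "senior", "delta": 9.0},
    {"seniority": "senior", "delta": 11.0},
]

overall = mean(row["delta"] for row in deltas)  # the single dashboard number

by_segment = {}
for row in deltas:
    by_segment.setdefault(row["seniority"], []).append(row["delta"])
segmented = {seg: mean(vals) for seg, vals in by_segment.items()}

print(f"overall: {overall:+.1f}%")   # reads as "no effect"
for seg, avg in sorted(segmented.items()):
    print(f"{seg}: {avg:+.1f}%")     # juniors losing, seniors gaining
```

<p>The single number says nothing happened; the segmented view shows a redistribution. The same split applies to tenure, team, and customer type.</p>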
<hr />
<h2>Distinguish substitutive from augmentative use</h2>
<p>One of the paper’s most relevant findings concerns the difference between substitutive and augmentative AI use. The authors used Anthropic’s data to classify how people actually use language models: to generate final outputs (substitution) or to iterate, learn, and validate (augmentation).</p>
<p>In professions where use is predominantly substitutive, youth employment has collapsed. Where use is predominantly augmentative, there’s no decline; in fact, some of these categories show above-average growth.</p>
<p>Not all “deployments” are equal. A system that automatically generates financial reports substitutes human labor differently from one that helps analysts explore scenarios. Metrics should capture this distinction: classify each AI application as predominantly substitutive or augmentative, separately track impact on headcount, skill mix, and internal training capacity. Augmentative systems might have less immediate ROI but more sustainable effects.</p>
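<p>That separate tracking can start as a tagged registry. The deployment names, the substitutive/augmentative labels, and the headcount figures below are illustrative assumptions, not a standard taxonomy or measured data:</p>

```python
from collections import defaultdict

# Hypothetical deployment registry. The mode tags and headcount figures
# are invented for illustration.
applications = [
    {"name": "auto report generator",        "mode": "substitutive", "headcount_delta": -3},
    {"name": "ticket auto-responder",        "mode": "substitutive", "headcount_delta": -2},
    {"name": "scenario exploration copilot", "mode": "augmentative", "headcount_delta": 0},
    {"name": "code review assistant",        "mode": "augmentative", "headcount_delta": 1},
]

impact_by_mode = defaultdict(int)
for app in applications:
    impact_by_mode[app["mode"]] += app["headcount_delta"]

for mode in sorted(impact_by_mode):
    print(f"{mode}: headcount delta {impact_by_mode[mode]:+d}")
```

<p>An aggregate “AI headcount impact” of −4 would hide that one class of deployments accounts for the entire decline.</p>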
<hr />
<h2>Control for external shocks</h2>
<p>One of the Stanford paper’s most sophisticated methodological aspects is its use of firm-time fixed effects. In practice, the authors compare workers within the same company in the same month, thus isolating the AI exposure effect from any other factor affecting the company: budget cuts, sector slowdown, strategy changes.</p>
<p>The result: even controlling for all these factors, young workers in AI-exposed roles show a relative decline of <strong>16%</strong> compared to colleagues in non-exposed roles at the same company.</p>
<p>This kind of rigor is rare in corporate evaluations. When an AI project launches and costs drop, it’s easy to credit the AI. But maybe costs would have dropped anyway due to seasonal factors. Maybe the team was already optimizing before the launch. Maybe the comparison is with an anomalous period.</p>
<p>The solution is to define baselines and control groups before launch. Don’t compare “before vs after” but “treated vs untreated” in the same period. Use A/B tests where possible, or at least comparisons with teams, regions, or segments that haven’t adopted AI.</p>
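<p>The “treated vs untreated” comparison is textbook difference-in-differences, and the arithmetic fits in a few lines. All numbers here (average weekly tickets resolved per team) are invented:</p>

```python
# Difference-in-differences on hypothetical numbers: before/after averages
# for teams that adopted the AI tool versus comparable teams that did not.
treated = {"before": 100.0, "after": 130.0}  # teams that adopted
control = {"before": 100.0, "after": 120.0}  # comparable teams that did not

naive_effect = treated["after"] - treated["before"]  # +30, credits AI with everything
trend        = control["after"] - control["before"]  # +20, would have happened anyway
did_effect   = naive_effect - trend                  # +10, the part attributable to AI

print(f"naive before/after: {naive_effect:+.0f}")
print(f"background trend:   {trend:+.0f}")
print(f"diff-in-diff:       {did_effect:+.0f}")
```

<p>The naive before/after comparison overstates the effect by a factor of three in this toy example; the control group absorbs the seasonal and organizational noise.</p>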
<hr />
<h2>Toward high-frequency economic dashboards</h2>
<p>In his predictions for 2026, Brynjolfsson proposed the idea of <em>“AI economic dashboards”</em>, tools that track AI’s economic impact in near real-time, updated monthly instead of with the typical delays of official statistics.</p>
<p>It’s an ambitious proposal at the macro level. But the underlying logic is applicable at the company level: stop waiting for quarterly reports to understand if AI is working and instead build continuous monitoring systems that capture effects as they emerge.</p>
<p>Most AI projects are evaluated like traditional investments: ex-ante business case, periodic reviews, final post-mortem. But AI doesn’t behave like a traditional asset. Its effects are distributed, emergent, often unexpected. A continuous monitoring system can catch drift before it becomes a problem.</p>
<p>In practice, this means working with real-time data instead of retrospective data. If the payroll system can tell you today how many people were hired yesterday in each role, you can track AI’s effect on headcount with a lag of days, not months. The same applies to tickets handled, sales closed, errors detected.</p>
<p>Another key principle: favor leading metrics over lagging ones. The actual utilization rate (how many employees actually use the AI tool every day) is a leading indicator. If it drops, there are problems before they show up in productivity numbers.</p>
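<p>A leading-indicator check like this can be a few lines of monitoring code. The seat count, the usage series, and the alert threshold below are all hypothetical:</p>

```python
# Hypothetical numbers throughout: 200 licensed seats, seven days of
# daily-active-user counts for the AI tool, a 10-point alert threshold.
seats = 200
daily_active = [120, 118, 115, 90, 82, 75, 70]  # last seven days

rates = [u / seats for u in daily_active]
early  = sum(rates[:3]) / 3   # adoption at the start of the window
recent = sum(rates[-3:]) / 3  # adoption now

ALERT_DROP = 0.10  # alert if adoption fell by more than 10 percentage points
if early - recent > ALERT_DROP:
    print(f"utilization fell from {early:.0%} to {recent:.0%}: investigate now")
```

<p>The drop shows up days after the rollout problem starts, long before it surfaces in quarterly productivity numbers.</p>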
<p>Just as the Stanford paper segments by age, corporate dashboards should segment by role, tenure, and prior performance. AI might help top performers while harming others, or vice versa.</p>
<p>Internal comparisons are also essential: teams that adopted AI vs teams that didn’t, periods with the feature active vs periods with it deactivated. These comparisons are more informative than pure time trends.</p>
<hr />
<h2>The cost of not measuring</h2>
<p>There’s a direct economic argument for investing in better measurement. The 42% of companies that abandoned AI projects in 2025 spent budget, time, and management attention only to get nothing. With better metrics, some of those projects would have been stopped earlier. Others would have been corrected mid-course. Others still would never have started.</p>
<p>The MIT NANDA report estimates that companies are spending <strong>$30-40 billion per year</strong> on generative AI. If 95% generates no measurable ROI, we’re talking about tens of billions burned. Not because the technology doesn’t work, but because it’s applied poorly, measured worse, and therefore never corrected.</p>
<p>The Brynjolfsson paper offers a model of what AI measurement could be. Administrative data instead of surveys. Demographic granularity instead of aggregate averages. Rigorous controls instead of naive comparisons. Continuous monitoring instead of point-in-time evaluations.</p>
<p>No company has Stanford’s resources or access to ADP’s data. But the principles are transferable: segment, distinguish substitutive from augmentative use, control for confounding factors, monitor in real time. Those who adopt these principles will have an information advantage over those who continue tracking deployments and hours saved.</p>
<hr />
<h2>Sources</h2>
<p>Brynjolfsson, E., Chandar, B., &amp; Chen, R. (2025). <a href="https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/"><em>Canaries in the Coal Mine: Six Facts about the Recent Employment Effects of AI</em></a>. Stanford Digital Economy Lab.</p>
<p>Deloitte AI Institute. (2025). <a href="https://www2.deloitte.com/us/en/pages/consulting/articles/state-of-generative-ai-in-enterprise.html"><em>State of Generative AI in the Enterprise</em></a>. Deloitte.</p>
<p>MIT Project NANDA. (2025). <a href="https://projectnanda.org/"><em>The GenAI Divide 2025</em></a>. Massachusetts Institute of Technology.</p>
<p>MIT Sloan Management Review. (2024). <a href="https://sloanreview.mit.edu/projects/the-future-of-strategic-measurement-enhancing-kpis-with-ai/"><em>The Future of Strategic Measurement: Enhancing KPIs With AI</em></a>. MIT Sloan.</p>
<p>S&amp;P Global Market Intelligence. (2025, October). <a href="https://www.spglobal.com/market-intelligence/en/news-insights/research/2025/10/generative-ai-shows-rapid-growth-but-yields-mixed-results"><em>Generative AI Shows Rapid Growth but Yields Mixed Results</em></a>. S&amp;P Global.</p>
]]></content:encoded>
      <category>Business</category>
      <category>Research</category>
      <category>Methodology</category>
      <category>KPI</category>
      <category>Metrics</category>
      <category>AI Measurement</category>
      <category>Enterprise AI</category>
      <category>ROI</category>
      <atom:link rel="alternate" hreflang="it" href="https://ireneburresi.dev/blog/business/misurare-ia/"/>
    </item>
    <item>
      <title>Why replacing people doesn&apos;t fix the team</title>
      <link>https://ireneburresi.dev/en/blog/methodology/debugging-organizzativo-hackman/</link>
      <guid isPermaLink="true">https://ireneburresi.dev/en/blog/methodology/debugging-organizzativo-hackman/</guid>
      <pubDate>Sat, 22 Mar 2025 00:00:00 GMT</pubDate>
      <dc:creator>Irene Burresi</dc:creator>
      <dc:language>en</dc:language>
      <description><![CDATA[<p>Not "who isn't working" but "what in the structure isn't working." Six misdiagnoses flipped using Hackman's framework. The first debug is on the structure.</p>]]></description>
      <content:encoded><![CDATA[<p>Marco’s team isn’t working. Everyone knows it. The diagnosis comes fast: “Marco isn’t motivated.” “Sara doesn’t communicate.” The plan: a one-on-one with Marco to figure out what’s blocking him, a team building workshop, and if things don’t improve, swap someone out.</p>
<p>Three months later, Marco is gone. Luca replaced him. The team still isn’t working.</p>
<p>The scene keeps repeating because the diagnosis is wrong. Not wrong in an obvious way — wrong in a plausible way, which is worse. “Marco isn’t motivated” sounds reasonable. Everyone nods. Too bad the problem isn’t Marco.</p>
<p>Social psychology has a name for this: <em>fundamental attribution error</em>. Ross described it in 1977: when we observe behavior, we tend to attribute it to personal traits (laziness, poor collaboration) while underestimating the weight of the situation the person is in.</p>
<p>J. Richard Hackman spent forty years studying teams: flight crews, orchestras, intelligence squads. The bottom line of his research: structural conditions explain up to 80% of variance in team effectiveness (<a href="https://doi.org/10.1177/0021886305281984">Wageman, Hackman &amp; Lehman, 2005</a>). When a team isn’t working, the most likely problem isn’t who’s in it. It’s how it was designed.</p>
<p>Before replacing people, check the conditions.</p>
<hr />
<h2>The five conditions: a checklist, not a theory</h2>
<p>Hackman didn’t produce an abstract model. He produced a checklist. Five conditions that, when present, make it likely a team will work. When absent, make it likely it won’t. He validated them empirically across hundreds of teams in different contexts. They work as a diagnostic tool long before they work as theory.</p>
<p>Here’s a quick translation into software context. For the full picture, start with the <a href="https://ireneburresi.dev/blog/methodology/hackman-real-team-software/">previous piece on Hackman’s team vs. working group distinction</a>.</p>
<p><strong>1. Being a real team.</strong> Clear boundaries, stable composition, real interdependence. If even one of these three is missing, you don’t have a team. You have a group of people with a shared manager.</p>
<p><strong>2. A compelling direction.</strong> A clear, challenging, meaningful objective. Not “close the sprint stories” — that’s a unit of measurement, not a direction. A compelling direction answers the question: why does this work matter?</p>
<p><strong>3. An enabling structure.</strong> Roles, norms, skills. The minimum infrastructure so people don’t have to renegotiate everything from scratch every week.</p>
<p><strong>4. A supportive organizational context.</strong> Does the team have the information it needs? The resources? Does the reward system incentivize group outcomes or individual performance? This is the condition the team can’t give itself. It depends on the organization around it.</p>
<p><strong>5. Competent process coaching.</strong> Not technical mentoring: someone who helps the team work better <em>together</em>. How they make decisions, handle disagreements, distribute work. It’s the condition that matters least of the five (the 10% in Hackman’s 60/30/10), but it’s the one everyone focuses on.</p>
<p>The instinct when managing teams is to start at the bottom: coaching, facilitation, interpersonal dynamics. Hackman says: start at the top.</p>
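<p>For a software audience, that top-down order can even be written as code. This is a toy diagnostic, not Hackman’s validated instrument (that would be the Team Diagnostic Survey); the question wordings are my paraphrase of the five conditions:</p>

```python
# The five conditions as a top-down debug: check structural conditions in
# Hackman's order before reaching for coaching. Questions are paraphrases,
# not the Team Diagnostic Survey items.
CONDITIONS = [
    ("real team",            "Clear boundaries, stable composition, real interdependence?"),
    ("compelling direction", "Can you say in one sentence why the work matters?"),
    ("enabling structure",   "Roles, norms, and skills settled, not renegotiated weekly?"),
    ("supportive context",   "Do information, resources, and rewards back group outcomes?"),
    ("competent coaching",   "Does someone help the team work better together?"),
]

def first_failing_condition(answers):
    """Return the highest-leverage missing condition, or None if all hold."""
    for name, _question in CONDITIONS:
        if not answers.get(name, False):
            return name
    return None

# Example: everything "looks fine" except the mandate.
answers = {"real team": True, "compelling direction": False,
           "enabling structure": True, "supportive context": True,
           "competent coaching": True}
print(first_failing_condition(answers))  # compelling direction
```

<p>The point of the ordering: a missing condition higher up the list makes fixes lower down mostly wasted effort.</p>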
<hr />
<h2>The table that flips the diagnosis</h2>
<p>This is the core of the argument. Six phrases I hear repeated constantly. For each one: the usual diagnosis, the most likely structural cause, and where to look instead.</p>
<p>Some of these mappings are directly anchored to Hackman’s research. Others are my translation into the software context, and I flag them as such.</p>
<h3>“Marco isn’t motivated”</h3>
<p>It’s Marco’s problem, people say. He lacks drive, ownership. Maybe he’s not right for the role.</p>
<p>More likely, the team lacks a <strong>compelling direction</strong> (condition 2). If the team’s objective is vague, purely administrative, or changes every two weeks, motivation isn’t a trait of Marco’s. It’s a rational response to a context that gives no reason to invest energy. I’ve seen brilliant developers seem disengaged because their team’s mandate was “support the product” — which in practice meant answering tickets endlessly with no vision of where things were heading.</p>
<p>Don’t look at Marco. Look at the team’s mandate. If you can’t explain in one sentence why the work matters, that’s your problem.</p>
<h3>“Sara doesn’t communicate”</h3>
<p>The usual diagnosis: Sara is closed off, doesn’t share. Where to look instead is at work design. If everyone works on a separate feature and interactions are limited to the occasional comment on a pull request, there’s no structural reason to communicate more. Sara isn’t uncollaborative. She’s rational: why would she update others on work that doesn’t affect them?</p>
<p>The cause is a lack of <strong>real interdependence</strong> (condition 1). You can run all the communication workshops you want: if the work structure doesn’t create mutual dependencies, communication will remain an empty ritual.</p>
<h3>“The team doesn’t trust each other”</h3>
<p>The classic response: team building activities, vulnerability exercises, an offsite to get to know each other better.</p>
<p>Trust in a team doesn’t come from an afternoon workshop. It comes from working together long enough to predict how the other person will behave. Hackman saw this with flight crews: crews that had flown together for a while made fewer errors than newly formed ones, even with less experienced pilots. The most likely problem is <strong>unstable composition</strong> (condition 1). If you rotate people between teams every quarter, you restart from zero each time.</p>
<p>How many times has the team’s composition changed in the past year? If the answer is “often,” the trust deficit isn’t a relationship problem. It’s structural turnover.</p>
<h3>“Retros don’t produce anything”</h3>
<p>Here the question comes before the answer: is there a shared process to improve? Retrospectives assume a team that collaborates on a common outcome and wants to improve how they do it. If everyone has their own workflow, priorities, and blockers, the retro has no object. The format is irrelevant. The cause is that the group isn’t a <strong>real team</strong> (condition 1).</p>
<h3>“The lead can’t guide the team”</h3>
<p>The usual diagnosis: they lack soft skills. Send them to a leadership course.</p>
<p>The lead might be excellent. But if the team doesn’t have access to the information it needs, if resources get cut without notice, if the evaluation system rewards individual performance and ignores group outcomes, the lead is in an impossible position. It’s like asking a pilot to land well with a poorly designed airplane — you can send them to every flight course there is, but if the flaps don’t work, the problem isn’t their technique.</p>
<p>The cause is the <strong>organizational context</strong> (condition 4). Does the lead have the authority to make decisions? Are priorities stable enough to allow a plan? If not, the leverage is in the organization, not the person.</p>
<h3>“Too much conflict”</h3>
<p>Here the answer is less straightforward. Conflict in a team can be a good sign: if the team is real and negotiating norms for working together (condition 3), some friction is physiological. Hackman says it clearly: the best teams aren’t conflict-free. They’re teams that have learned to manage conflict.</p>
<p>But conflict can also signal <strong>unclear boundaries</strong> (condition 1): who decides what? Who has authority over which part of the system? If two people both think they’re responsible for the same area, the conflict isn’t relational. It’s structural.</p>
<p>The useful distinction: if the conflict is about <em>how</em> to do things, it’s probably healthy. If it’s about <em>who</em> should do what, it’s a boundary problem. And if it’s chronic despite everyone’s best intentions, it’s almost certainly not a people problem.</p>
<hr />
<h2>Why we keep getting the diagnosis wrong</h2>
<p>If structural conditions matter this much, why does the default remain “someone’s fault”?</p>
<p>The first reason is that <strong>structure is invisible</strong>. You see people. You see behaviors. You don’t see the conditions in which those behaviors emerge. Hackman pressed this point in the last paper of his career, “From causes to conditions” (<a href="https://onlinelibrary.wiley.com/doi/full/10.1002/job.1774">Hackman, 2012</a>): research on teams has focused too much on internal causes (who does what, how they behave) and too little on external conditions (how the team is designed, what context it operates in). The same applies to people managing teams in practice.</p>
<p>The second is that <strong>replacing people feels easier</strong>. Giving Marco feedback, sending Sara to a workshop, swapping the lead: these are all actions a manager can take tomorrow morning. Redesigning the team’s mandate, stabilizing composition, changing the incentive system — those require time, authority, and often negotiation with someone above. The interpersonal diagnosis is attractive because it has solutions at hand. Too bad they’re the wrong solutions.</p>
<p>The third is more insidious, and I’ve seen it up close: <strong>the organization incentivizes individual diagnosis</strong>. Performance reviews evaluate people, not conditions. PIPs apply to people. “Cultural fit” is an attribute of people. The entire management apparatus is built around the idea that performance is an individual trait. Admitting the problem is structural means admitting the system is poorly designed — and the person who designed the system is often the one making the diagnosis.</p>
<hr />
<p>Next time someone says “the problem is Marco,” pause for a moment. Not because Marco is necessarily innocent: sometimes the problem really is the person. But it’s less common than we think, and the interpersonal diagnosis is so intuitive we always get there first.</p>
<p>Try reframing. Not “who isn’t working?” but “what in the structure isn’t working?” Move from person to condition. Then ask yourself: is this condition under my control? If yes, that’s where the energy goes. If not, at least you know that no team building workshop will fix it.</p>
<p>It’s not a small difference. It’s the difference between debugging the code and debugging the compiler.</p>
<hr />
<h2>Sources</h2>
<ul>
<li>Hackman, J.R. (2002). <em>Leading Teams: Setting the Stage for Great Performances</em>. Harvard Business School Press.</li>
<li>Hackman, J.R. (2011). <em>Collaborative Intelligence: Using Teams to Solve Hard Problems</em>. Berrett-Koehler.</li>
<li>Hackman, J.R. (2012). <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/job.1774">From causes to conditions in group research</a>. <em>Journal of Organizational Behavior</em>, 33, 428-444.</li>
<li>Wageman, R., Hackman, J.R. &amp; Lehman, E.V. (2005). <a href="https://doi.org/10.1177/0021886305281984">Team Diagnostic Survey: Development of an Instrument</a>. <em>Journal of Applied Behavioral Science</em>, 41(4), 373-398.</li>
<li>Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), <em>Advances in Experimental Social Psychology</em> (Vol. 10, pp. 173-220). Academic Press.</li>
</ul>
]]></content:encoded>
      <category>Methodology</category>
      <category>Business</category>
      <category>Team Design</category>
      <category>Hackman</category>
      <category>Organizational Design</category>
      <category>Team Effectiveness</category>
      <category>Fundamental Attribution Error</category>
      <category>Diagnostics</category>
      <atom:link rel="alternate" hreflang="it" href="https://ireneburresi.dev/blog/methodology/debugging-organizzativo-hackman/"/>
    </item>
    <item>
      <title>Frustrated with Agile? Maybe your team isn&apos;t actually a team</title>
      <link>https://ireneburresi.dev/en/blog/methodology/hackman-real-team-software/</link>
      <guid isPermaLink="true">https://ireneburresi.dev/en/blog/methodology/hackman-real-team-software/</guid>
      <pubDate>Sat, 15 Mar 2025 00:00:00 GMT</pubDate>
      <dc:creator>Irene Burresi</dc:creator>
      <dc:language>en</dc:language>
      <description><![CDATA[<p>Hackman's distinction between real teams and working groups explains why standups, retros, and planning feel pointless. The problem isn't Agile — it's a structural mismatch.</p>]]></description>
      <content:encoded><![CDATA[<p>The standup takes twelve minutes. Five people, each reciting their update while staring at some vague point on the screen. Nobody comments on what anyone else says, because there’s no reason to: everyone is working on their own feature, in a different corner of the codebase. The standup ends, everyone goes back to doing exactly what they would have done without it. Then the retro. Three sticky notes come out of it: “improve communication,” same as last month. And sprint planning, which is really just individual task assignment with a two-week timer.</p>
<p>If this sounds familiar, you’re not alone. The <a href="https://digital.ai/resource/state-of-agile-report/">17th State of Agile Report</a> by <a href="https://digital.ai/">Digital.ai</a> (2024) found that only 11% of practitioners report being “very satisfied” with Agile practices in their organization. The frustration runs deep enough that two original signatories of the Agile Manifesto have publicly turned against what Agile has become: Ron Jeffries, co-creator of Extreme Programming, wrote that developers should <a href="https://ronjeffries.com/articles/018-01ff/abandon-1/">abandon Agile</a>, or at least the version organizations have made of it. Dave Thomas, another signatory, declared that <a href="https://pragdave.me/thoughts/active/2014-03-04-time-to-kill-agile.html">Agile is dead</a>, hollowed out by marketing and mass certification.</p>
<p>The most common diagnosis is that Agile has become bureaucracy: too many rituals, too much process, not enough code. It makes sense. Who hasn’t thought that at least once, walking out of yet another endless planning session?</p>
<p>But there’s another possibility. One that has nothing to do with Agile itself, but with something more fundamental: the structure you’re applying it to.</p>
<p>J. Richard Hackman spent 40 years studying real teams: flight crews, orchestras, intelligence teams, surgical teams. His research, condensed in <em>Leading Teams</em> (2002) and <em>Collaborative Intelligence</em> (2011), arrives at a distinction that almost nobody in software makes: the distinction between a <strong>team</strong> and a <strong>working group</strong>. They are different things. They work differently. And they require different tools.</p>
<p>Agile practices are designed for teams. Apply them to a working group and you get exactly the frustration you’re feeling. The problem isn’t the method. It’s a structural mismatch.</p>
<hr />
<h2>The most overused word in software</h2>
<p>In software, “team” is the default word for any group of people working on the same project. Five developers sharing a Jira board? Team. Two backend engineers, a frontend dev, and a designer reporting to the same manager? Team. Eight people across three time zones who meet at the 9 AM standup? Team.</p>
<p>Hackman would disagree. In <em>Leading Teams</em> he defines a “real team” through three minimum properties, all required.</p>
<p>The first is <strong>clear boundaries</strong>: everyone knows who is on the team and who is not. Sounds obvious, but in software practice it isn’t. The designer “shared” across three teams — in or out? The developer “on loan” for two sprints — a member? If you can’t make the list without hesitating, the boundaries aren’t clear.</p>
<p>The second is <strong>stable composition</strong>. People stay the same long enough to develop shared ways of working. Hackman studied flight crews: NASA data, which he reports in <em>Leading Teams</em>, showed that newly formed crews made more errors than those who had been flying together for a while, even when the newer crews had more experienced pilots. In software, quarterly rotation of people between teams destroys this effect. Every time, you start from zero.</p>
<p>The third, and most underrated, is <strong>real interdependence</strong>. The team’s output depends on collaboration between members — it’s not the sum of individual contributions. This is where most software “teams” fall apart. Five developers working on five independent features, with five cross-reviews done as a formality, are not interdependent. One person’s work doesn’t change another’s. If you removed all the meetings and put them in separate rooms, the result would be the same.</p>
<p>The test is brutal in its simplicity: if you eliminated standups, retros, and planning tomorrow, and everyone worked on their own, would the final product suffer? If the answer is no — if the result would be identical, maybe even faster without the interruptions — then what you have is not a team. It’s a group of individuals with a shared manager.</p>
<p>That’s not an insult. It’s a diagnosis. And the diagnosis is the first step toward stopping the use of the wrong tools.</p>
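<p>For a software audience, the diagnosis condenses naturally into code. Here is a minimal sketch in Python; the property names and the <code>diagnose</code> helper are my shorthand for Hackman’s three conditions, not terminology from <em>Leading Teams</em>:</p>

```python
from dataclasses import dataclass

@dataclass
class Group:
    """A set of people sharing a manager, a board, and a standup."""
    membership_is_unambiguous: bool   # can you list the members without hesitating?
    composition_is_stable: bool       # same people long enough to build shared norms
    output_needs_collaboration: bool  # would the product suffer if everyone worked alone?

def diagnose(g: Group) -> str:
    """Hackman's test: all three properties are required for a real team."""
    if (g.membership_is_unambiguous
            and g.composition_is_stable
            and g.output_needs_collaboration):
        return "team"
    return "working group"

# Five developers on five independent features, rotated quarterly:
print(diagnose(Group(True, False, False)))  # prints: working group
```

<p>The point of the conjunction is that two out of three is not a partial pass. A stable, well-bounded group working on independent features still diagnoses as a working group.</p>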
<hr />
<h2>Structural conditions: the 60/30/10</h2>
<p>Hackman didn’t stop at the definition. He studied what makes a team effective, and the answer is less intuitive than you’d think.</p>
<p>Research by Hackman and his collaborator Ruth Wageman identifies five conditions that enable team performance: being a real team, having a compelling direction, an enabling structure, a supportive organizational context, and competent coaching. I won’t go deep on all of them here — each deserves its own article. The relevant point is different: how much do these conditions matter compared to what a leader does day to day?</p>
<p>Hackman’s answer, laid out in <em>Collaborative Intelligence</em> (2011), is the <strong>60/30/10</strong>: 60% of a team’s effectiveness depends on design — the structural conditions put in place before the team starts working. 30% depends on launch — how the team is kicked off, the first days, the initial norms. The remaining 10% on ongoing coaching.</p>
<p>A clarification: Hackman presents this split as a “best estimate,” not as the result of a single study yielding those exact percentages. It’s a heuristic that synthesizes decades of research. Not a precise data point.</p>
<p>But the order of magnitude has solid empirical grounding. The Team Diagnostic Survey (TDS), developed by Wageman, Hackman, and Lehman and published in 2005, was administered to 2,474 people across 321 teams. The finding: structural conditions explain up to 80% of the variance in team effectiveness (<a href="https://doi.org/10.1177/0021886305281984">Wageman, Hackman &amp; Lehman, 2005</a>). Not 20%. Not 50%. Eighty percent.</p>
<p>An earlier study by Wageman (2001) on <a href="https://doi.org/10.1287/orsc.12.5.559.10094">43 self-managing teams at Xerox</a> had already shown the same pattern: a leader’s design activities influenced team performance. Day-to-day coaching activities did not.</p>
<p>The implication for software is direct. When a team isn’t working, the instinctive reaction is to work on dynamics: facilitate retros better, improve communication, do team building. Hackman’s framework suggests the most powerful lever is upstream. Who is on the team? What is the mandate? How is the work designed? Does the organizational context support it? If these conditions aren’t there, no amount of facilitation will compensate.</p>
<p>But the first of those five conditions — “being a real team” — opens a question that almost nobody in software asks.</p>
<hr />
<h2>Team or working group: the distinction that changes everything</h2>
<p>A point Hackman makes often, and that gets misunderstood just as often: the distinction between team and working group is not a value judgment. It’s not “team = good, working group = bad.” They are two different organizational modes, each with its own strengths.</p>
<p>A <strong>working group</strong> is a set of people reporting to the same manager who may coordinate, but whose output is primarily individual. Everyone has their own goals, responsibilities, deliverables. The manager coordinates, assigns, removes obstacles. Deep interdependence is not required.</p>
<p>A <strong>team</strong> produces collective output. The result cannot be decomposed into the sum of individual contributions: it requires continuous collaboration, shared decisions, mutual adjustment. The cost is higher — you need stable boundaries, shared norms, time to develop ways of working together. But for certain kinds of problems, it’s the only configuration that works.</p>
<p>The damage comes from confusing the two.</p>
<p>A working group managed as a team generates overhead without benefit. Coordination meetings exist, but there’s nothing substantial to coordinate. Retrospectives don’t produce actions because there’s no shared process to improve: everyone has their own workflow, priorities, blockers. The Agile ceremony becomes a fixed cost on work that doesn’t require it.</p>
<p>The damage goes both ways, though. A real team managed as a working group is equally dysfunctional. If you have people who need to collaborate on a complex problem and treat them as individual contributors — assigning separate tasks, evaluating them individually, without protecting time for joint work — you’re undermining the one thing that makes that team effective: interdependence.</p>
<p>I’ve seen both scenarios. The first is more common in software, because the default organization tends to be the working group (developers assigned to individual features), while the process infrastructure is almost always that of a team (Scrum, Kanban with standups, retros, planning).</p>
<p>The operational question isn’t “how do I improve my team?” It’s more fundamental: is what I’m leading a team or a working group? The answer changes everything that follows.</p>
<hr />
<h2>The Agile mismatch: right tools, wrong structure</h2>
<p>Back to the frustration we started with. Agile practices — Scrum in particular — didn’t emerge in a vacuum. They’re designed around a specific assumption: that a small group of people works interdependently toward a shared goal, iterating together. The sprint assumes a common goal. The standup assumes that knowing what others are doing changes your work. The retro assumes a shared process to inspect. Planning assumes collective prioritization decisions.</p>
<p>These are all tools that assume interdependence. Without it, they lose their point.</p>
<p>Now look at the typical “team” setup in many software organizations. People are assigned — often rotated — to a “team” that is really an organizational container. Everyone works on their own user story, with interactions limited to code review and the occasional Slack question. The “sprint goal” is the sum of individual stories. Interdependence is minimal or absent.</p>
<p>Apply team tools to this structure and the result is predictable. The standup becomes a round of updates nobody listens to. The retro produces generic complaints. Planning becomes bureaucracy. Not because Scrum is bureaucracy — but because you’re using it on a structure it wasn’t designed for.</p>
<p>The Manifesto signatories say as much, in different words. Jeffries talks about “Dark Scrum” — organizations using Agile rituals as control mechanisms, draining them of collaboration. Thomas says “Agile” was turned into a commercial noun when it was meant to be an adjective describing a way of working. Their critique is legitimate. But the diagnosis they offer — “organizations have corrupted Agile” — is incomplete. Hackman’s framework provides a more structural one: many organizations haven’t corrupted Agile. They’ve applied it to structures that aren’t teams.</p>
<p>I bring this up because I’ve seen it happen more times than I’d like to admit, including in contexts with competent people and good intentions. The standard response to the malaise was always the same: change the facilitator, try a different retro format, add a ceremony. Hackman’s framework gave me the vocabulary for what I felt but couldn’t articulate: the problem wasn’t the execution. It was the premise.</p>
<p>This changes the diagnosis and the solutions. If the problem is “Agile has become bureaucracy,” the answer is less process, fewer rituals, more autonomy. Sometimes that’s right. But if the problem is a structural mismatch, the answer is different: either transform the structure into a real team — with the costs that entails — or accept that you have a working group and adopt tools consistent with that reality. Both options are legitimate. What doesn’t work is staying in the middle.</p>
<hr />
<p>Back to the twelve-minute standup we started with. Five people, five updates, no interaction. The obvious diagnosis is that the standup is poorly facilitated, or that Scrum doesn’t work, or that the team needs to “work on communication.” These are all responses that address the symptom.</p>
<p>Hackman’s question is different, and it comes first: do those five people need to talk to each other every morning to do their work? If each person’s work doesn’t depend on the others’, the standup isn’t poorly facilitated. It’s useless by design. The retro doesn’t produce actions because there’s no shared process to improve. Planning is bureaucracy because there are no collective decisions to make.</p>
<p>It’s not a question of execution. It’s a question of structure.</p>
<p>Next time you think “Agile doesn’t work,” try reframing. Don’t ask how to improve the process. Ask whether what you’re leading is a team or a working group. The answer isn’t a judgment. It’s a diagnosis. And from the diagnosis, everything else follows.</p>
<hr />
<h2>Sources</h2>
<ul>
<li>Hackman, J.R. (2002). <em>Leading Teams: Setting the Stage for Great Performances</em>. Harvard Business School Press.</li>
<li>Hackman, J.R. (2011). <em>Collaborative Intelligence: Using Teams to Solve Hard Problems</em>. Berrett-Koehler.</li>
<li>Wageman, R. (2001). <a href="https://doi.org/10.1287/orsc.12.5.559.10094">How Leaders Foster Self-Managing Team Effectiveness: Design Choices Versus Hands-on Coaching</a>. <em>Organization Science</em>, 12(5), 559-577.</li>
<li>Wageman, R., Hackman, J.R. &amp; Lehman, E.V. (2005). <a href="https://doi.org/10.1177/0021886305281984">Team Diagnostic Survey: Development of an Instrument</a>. <em>Journal of Applied Behavioral Science</em>, 41(4), 373-398.</li>
<li><a href="https://digital.ai/">Digital.ai</a> (2024). <a href="https://digital.ai/resource/state-of-agile-report/">17th State of Agile Report</a>.</li>
<li>Jeffries, R. (2018). <a href="https://ronjeffries.com/articles/018-01ff/abandon-1/">Developers Should Abandon Agile</a>.</li>
<li>Thomas, D. (2014). <a href="https://pragdave.me/thoughts/active/2014-03-04-time-to-kill-agile.html">Agile is Dead (Long Live Agility)</a>.</li>
</ul>
]]></content:encoded>
      <category>Methodology</category>
      <category>Business</category>
      <category>Team Design</category>
      <category>Agile</category>
      <category>Hackman</category>
      <category>Team vs Working Group</category>
      <category>Scrum</category>
      <category>60-30-10</category>
      <atom:link rel="alternate" hreflang="it" href="https://ireneburresi.dev/blog/methodology/hackman-real-team-software/"/>
    </item>
    <item>
      <title>Who Will the Senior Engineers of Tomorrow Be?</title>
      <link>https://ireneburresi.dev/en/blog/business/senior-domani/</link>
      <guid isPermaLink="true">https://ireneburresi.dev/en/blog/business/senior-domani/</guid>
      <pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
      <dc:creator>Irene Burresi</dc:creator>
      <dc:language>en</dc:language>
      <description><![CDATA[<p>Employment for developers under 25 dropped 20% since ChatGPT's launch. Companies hire fewer juniors because AI does those tasks. But without junior developers today, who will lead teams in ten years?</p>]]></description>
      <content:encoded><![CDATA[<h2>The apprenticeship paradox</h2>
<p><em>Companies don’t hire juniors because AI does those tasks better. But without junior developers today, who will lead teams in ten years?</em></p>
<p>There’s a question that rarely appears in quarterly reports: if we stop hiring people who are learning, who will know how to do this job a decade from now?</p>
<p>The numbers tell a story that should concern anyone managing technical teams. Employment for software developers between ages 22 and 25 has dropped 20% from the peak in late 2022, according to a <a href="https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/">paper from Stanford’s Digital Economy Lab</a> based on payroll data from 25 million workers. It’s not a uniform decline: in the same period, employment for those over 35 in the same roles grew 8%.</p>
<p>The mechanism is what we might call the apprenticeship paradox: companies stop hiring entry-level because AI does those tasks better than a recent graduate. But without entry-level workers today, they won’t have senior engineers tomorrow.</p>
<hr />
<h2>The numbers of collapse</h2>
<p>The contraction is not an impression. It’s documented by multiple independent sources.</p>
<p>Entry-level hiring at the top 15 tech companies dropped 25% between 2023 and 2024, according to <a href="https://spectrum.ieee.org/ai-effect-entry-level-jobs">SignalFire</a>. Since 2021, the average age of technical hires has increased by three years. Companies aren’t just hiring less: they’re hiring differently, preferring senior profiles who can be productive from day one.</p>
<p>Tech internships have collapsed 30% since 2023, <a href="https://stackoverflow.blog/2025/12/26/ai-vs-gen-z">according to Handshake</a>. Meanwhile, applications have increased 7%. More people competing for fewer positions, and the remaining positions increasingly require prior experience.</p>
<p>A <a href="https://www.finalroundai.com/blog/ai-is-making-it-harder-for-junior-developers-to-get-hired">Harvard study</a> of 285,000 American companies found that when firms adopt generative AI, junior employment drops 9-10% within six quarters. Senior employment remains stable. These aren’t mass layoffs: it’s a silent hiring freeze. Companies simply stop opening entry-level positions.</p>
<p>The pattern repeats in Europe. Junior tech positions have <a href="https://restofworld.org/2025/engineering-graduates-ai-job-losses/">dropped 35%</a> in major EU countries during 2024, based on aggregated data from LinkedIn, Indeed, and Eures. In the UK, the Big Four consulting firms cut graduate hiring between 6% and 29% in two years. In India, IT companies have reduced entry-level roles by 20-25%, according to an EY report.</p>
<p>The World Economic Forum, in its Future of Jobs Report 2025, warns that 40% of employers expect to reduce staffing where AI can automate tasks. And automatable tasks are, almost by definition, the ones junior developers used to do.</p>
<hr />
<h2>The logic of the short term</h2>
<p>The rationale behind these choices is understandable. A senior engineer with AI tools can do what previously required two or three juniors, at least for certain tasks. GitHub Copilot, Cursor, and similar tools promise productivity gains of 20-50% according to their vendors. For a CFO looking at the next quarter, hiring a junior who will need six months of training before being productive seems like a difficult investment to justify.</p>
<p>James O’Brien, a computer science professor at Berkeley who works with startups, <a href="https://sfstandard.com/2025/05/20/silicon-valley-white-collar-recession-entry-level/">describes the shift</a>: “Previously, startups would hire one senior person and two or three early-career coders to assist. Now they ask: why hire a recent graduate when AI is cheaper and faster?”</p>
<p>It’s a reasonable question in the short term. Code generated by AI isn’t top quality, but neither is code written by a recent graduate. The difference, O’Brien notes, is that the iterative process to improve AI code takes minutes. A junior might take days for the same task.</p>
<p>Heather Doshay, head of talent at SignalFire, sums it up: “Nobody has the patience or time for hand-holding in this new environment, where much of the work can be done autonomously by AI.”</p>
<hr />
<h2>The problem nobody calculates</h2>
<p>There’s a flaw in this logic, and it’s called the talent pipeline.</p>
<p>Matt Garman, CEO of AWS, <a href="https://www.finalroundai.com/blog/aws-ceo-ai-cannot-replace-junior-developers">said it explicitly</a>: “If you don’t have a talent pipeline you’re building, if you don’t have junior people you’re mentoring and growing in the company, we often find that’s where the best ideas come from. If a company stops hiring juniors and developing them, eventually the whole system falls apart.”</p>
<p>It’s not rhetoric. It’s demographic mathematics applied to organizations. Every senior engineer, every tech lead, every CTO was once a junior. The path from recent graduate to technical leader requires years of experience on real projects, mistakes made and corrected, feedback received, patterns internalized. There is no shortcut.</p>
<p>If the industry stopped hiring juniors in 2023, by 2033 it will have a structural shortage of mid-level talent. By 2038, it will be short of senior engineers. By 2043, there will be no one to promote into technical leadership roles.</p>
<p>The problem is that this cost doesn’t appear in any quarterly balance sheet. It’s an invisible debt that accumulates silently, and when it becomes obvious, it will be too late to remedy quickly.</p>
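<p>The arithmetic behind that timeline can be sketched as a toy cohort model. Everything in it is an illustrative assumption for the sake of the shape — the starting headcounts, the promotion rate, the attrition rate are mine, not figures from the studies above:</p>

```python
def pipeline(years: int, junior_intake: float,
             promote: float = 0.2, attrition: float = 0.1) -> dict:
    """Toy cohort model of a talent pipeline. Each year a share of juniors
    is promoted to mid-level and a share of mids to senior; every tier
    loses some people to attrition. All rates are illustrative guesses."""
    juniors, mids, seniors = 100.0, 100.0, 100.0  # arbitrary starting headcount
    for _ in range(years):
        to_mid = juniors * promote
        to_senior = mids * promote
        juniors = juniors * (1 - attrition) - to_mid + junior_intake
        mids = mids * (1 - attrition) - to_senior + to_mid
        seniors = seniors * (1 - attrition) + to_senior
    return {"junior": round(juniors), "mid": round(mids), "senior": round(seniors)}

# Freeze junior hiring and the shortage travels up the pipeline:
print(pipeline(10, junior_intake=0))
print(pipeline(10, junior_intake=30))  # steady intake keeps every tier fed
```

<p>With intake frozen, the junior tier empties within a few years, the mid tier follows, and only later does the senior tier start to shrink: the same staggered timeline, and exactly why the cost stays invisible on a quarterly horizon.</p>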
<hr />
<h2>AI that teaches and AI that atrophies</h2>
<p>There’s a further irony in this situation. The same AI tools that are eliminating junior roles could, in theory, accelerate learning. An AI tutor available 24/7, patient, answering every question: it sounds like every student’s dream.</p>
<p>The reality is more complicated.</p>
<p>An experiment conducted by Wharton and Penn researchers on nearly a thousand high school math students tested two versions of a GPT-4-based tutor. The group with access to a ChatGPT-like interface (GPT Base) achieved 48% better results during assisted practice sessions. The group with a tutor designed to guide without giving direct answers (GPT Tutor) achieved 127% better results.</p>
<p>But here’s the point: when AI was removed and students took the exam on their own, the GPT Base group achieved 17% worse results than the control group who never used AI. The GPT Tutor group, by contrast, achieved results similar to control.</p>
<p>Students were using AI as a crutch. They performed better with assistance but learned less. When the assistance was removed, they found themselves worse off than those who never had it.</p>
<p>A <a href="https://time.com/7295195/ai-chatgpt-google-learning-school/">study from MIT Media Lab</a> documented what researchers call “cognitive debt”: using LLMs for writing seems to reduce mental effort during the task, but at the cost of more superficial learning. Researcher Nataliya Kosmyna expressed concern about developing brains: “Developing brains are the ones at highest risk.”</p>
<p>It doesn’t mean AI can’t help learning. The Wharton study shows it can, if designed with the right safeguards. But “wild” AI, the kind that gives answers instead of guiding toward answers, can do damage.</p>
<hr />
<h2>The new profile of the junior</h2>
<p>If fewer juniors will be hired, what characteristics must they have to be hired?</p>
<p>Market signals are clear. It’s no longer enough to know how to code. Employers <a href="https://spectrum.ieee.org/ai-effect-entry-level-jobs">expect</a> recent graduates to be able to manage projects, communicate with clients, understand the software development lifecycle. The “grunt work” that once served as a training ground is being automated. Those entering must be operational at a higher level almost from day one.</p>
<p>Jamie Grant, who manages career services for engineering at the University of Pennsylvania, describes the change: “They’re not necessarily just programming. There’s much more high-level thinking and understanding of the software development lifecycle.”</p>
<p>David Malan of Harvard, who teaches the world’s most-followed introduction to programming course, notes that the biggest impact of AI has been on programmers, not on the roles people expected it to hit (like call centers). The reason: programming work is relatively solitary and highly structured, perfect for automation.</p>
<p>But Malan also notes something interesting: in the United States, employment for “programmers” dropped 27.5% between 2023 and 2025, but employment for “software developers,” a more design-oriented position, dropped only 0.3%. The difference is in the level of abstraction. Those who write code are vulnerable. Those who design systems less so.</p>
<hr />
<h2>Three scenarios for the future</h2>
<p><strong>Scenario 1: The collapse of the pipeline</strong></p>
<p>Companies continue not to hire juniors. In five to ten years, the shortage of mid-level talent becomes acute. The remaining senior engineers command astronomical salaries. Companies that can’t afford them lose competitiveness. The industry polarizes between a few giants who can attract talent and everyone else struggling.</p>
<p><strong>Scenario 2: Apprenticeship reinvented</strong></p>
<p>Some companies realize the problem is coming and invest against the trend. They create intensive training programs, perhaps assisted by AI designed to teach instead of replace. They become the preferred employers for top talent, who know they can grow there. In the long term, they have a competitive advantage.</p>
<p><strong>Scenario 3: Uneven democratization</strong></p>
<p>AI lowers the barrier to entry for some skills (writing working code) but raises it for others (designing systems, debugging complex problems, managing AI itself). Those with access to quality training and mentorship can skip some steps. Those without remain stuck. Inequality of opportunity increases.</p>
<p>None of these scenarios is inevitable. They are possibilities that depend on choices companies, educational institutions, and policymakers will make in the coming years.</p>
<hr />
<h2>What those who hire can do</h2>
<p>If you manage a team or influence hiring decisions, some questions deserve reflection.</p>
<p><strong>Are you optimizing for the next quarter or the next ten years?</strong> A junior costs more in the short term. But the alternative is to depend entirely on the external market for talent, competing with everyone else who made the same choice.</p>
<p><strong>Is your team still teaching?</strong> If your senior people spend all their time producing and none of it teaching, you’re consuming human capital without regenerating it.</p>
<p><strong>How do you use AI in training?</strong> If your juniors use Copilot to get answers instead of learning to find them, you’re accelerating their short-term productivity while compromising their long-term growth.</p>
<p><strong>Are you hiring for today’s skills or tomorrow’s adaptability?</strong> Specific technical skills have an increasingly short half-life. The ability to learn, to reason about new problems, to work with people—those last.</p>
<hr />
<h2>What those starting out can do</h2>
<p>If you’re early in your career in a market that seems to close doors on you, some principles can help.</p>
<p>AI isn’t eliminating all junior work. It’s eliminating repetitive, isolated junior work. The roles that survive require human interaction, judgment about ambiguous problems, creativity applied to specific contexts. Look for those.</p>
<p>Learn to use AI as a tool, not a crutch. The difference between using ChatGPT to get answers and using it to explore problems is the difference between atrophying and growing.</p>
<p>Networking matters more than ever. If junior positions are scarce, competition is fierce, and often the person with a connection wins, not the person with the best CV. It’s not fair, but it’s real.</p>
<p>Cross-functional skills are not optional. Communication, project management, understanding the business: these are things AI can’t do and employers seek even in technical profiles.</p>
<hr />
<h2>The unanswered question</h2>
<p>I return to the initial question: who will the senior engineers of tomorrow be?</p>
<p>I don’t have a certain answer. No one does. We’re conducting a real-time experiment, without a control group, on a global scale.</p>
<p>What I know is that every senior person I know was once a junior who someone decided to hire and train. Every tech lead made beginner mistakes that someone had the patience to correct. Every systems architect wrote embarrassing code before writing elegant code.</p>
<p>If we eliminate that phase, if we treat it as a cost to cut rather than an investment to protect, we’re not optimizing. We’re consuming capital that we don’t know how to regenerate.</p>
<p>The question isn’t whether AI can replace juniors. It can, for many tasks. The question is whether we want an industry that only knows how to consume skills or one that also knows how to produce them.</p>
<p>For now, the numbers suggest we’ve chosen the first option. The bill will come. Not next quarter. But it will come.</p>
<hr />
<h2>Sources</h2>
<ul>
<li>Brynjolfsson, E., Chandar, B. &amp; Chen, R. (2025). <a href="https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/"><em>Canaries in the Coal Mine: Six Facts about the Recent Employment Effects of AI</em></a>. Stanford Digital Economy Lab.</li>
<li>Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö. &amp; Mariman, R. (2024). <em>Generative AI Can Harm Learning</em>. The Wharton School Research Paper.</li>
<li>Stack Overflow (2025, December). <a href="https://stackoverflow.blog/2025/12/26/ai-vs-gen-z"><em>AI vs Gen Z: How AI has changed the career pathway for junior developers</em></a>. Stack Overflow Blog.</li>
<li>IEEE Spectrum (2025, December). <a href="https://spectrum.ieee.org/ai-effect-entry-level-jobs"><em>AI Shifts Expectations for Entry Level Jobs</em></a>.</li>
<li>Rest of World (2025, December). <a href="https://restofworld.org/2025/engineering-graduates-ai-job-losses/"><em>AI is wiping out entry-level tech jobs, leaving graduates stranded</em></a>.</li>
<li>Kosmyna, N., et al. (2025). <a href="https://arxiv.org/abs/2506.08872"><em>Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task</em></a>. arXiv.</li>
<li>World Economic Forum (2025). <a href="https://www.weforum.org/publications/the-future-of-jobs-report-2025/"><em>Future of Jobs Report 2025</em></a>.</li>
<li>FinalRound AI (2025). <a href="https://www.finalroundai.com/blog/aws-ceo-ai-cannot-replace-junior-developers"><em>AWS CEO Shares 3 Solid Reasons Why Companies Shouldn’t Replace Juniors with AI Agents</em></a>.</li>
</ul>
]]></content:encoded>
      <category>Business</category>
      <category>Methodology</category>
      <category>Junior Developers</category>
      <category>AI Impact</category>
      <category>Talent Pipeline</category>
      <category>Future of Work</category>
      <category>Learning</category>
      <category>Career</category>
      <atom:link rel="alternate" hreflang="it" href="https://ireneburresi.dev/blog/business/senior-domani/"/>
    </item>
  </channel>
</rss>