Fuck Off Grok

As the parable goes, you let one Nazi into a bar, and soon you’ll have a Nazi bar. This is the fundamental issue with Grok and xAI. Whether the tech is functional is irrelevant. I wouldn’t know, as I refuse to touch it. Grok is an environmental and moral failure.

For those of us who believe in the future of AI, tolerating Grok is a liability. There is a massive backlash brewing against all artificial intelligence, and if we want actual progress in computing, the only rational move is to stop funding and using tools that poison the well. Seeing a legacy name like Fidelity Investments on the list of backers is beyond disappointing. It’s a choice to fund the industry’s worst players.

Thankfully, we have better options. I was relieved to see Cursor move Grok “below the fold” on their model list. We don’t have to settle for toxic tech, and we shouldn’t.

Awareness is the first step, but action is what shifts the industry. If you’re tired of seeing AI progress tied to environmental degradation and toxic rhetoric, here is how you can help clean up the bar:

  • Check Your Portfolio: Many of us have 401(k)s or IRAs managed by Fidelity. As one of xAI’s primary backers, they are using your capital to fund these projects. Reach out to your representative or use shareholder platforms to voice your opposition to their investment in xAI.
  • Audit Your Tech Stack: Follow the lead of tools like Cursor. If a platform you use prioritizes Grok or xAI models, let them know why you’re switching to a competitor. In the age of LLMs, convenience is no excuse for complicity.
  • Support the Frontlines: The “Colossus” data center in Memphis is a real-world environmental crisis for local residents. Consider supporting organizations like the Southern Environmental Law Center (SELC) or Young Gifted & Green, who are fighting the unpermitted gas turbines on the ground.
  • Demand Transparency: If you’re a developer, stick to models that prioritize safety testing and ethical data sourcing. The more we normalize “undressed” images and “MechaHitler” rhetoric as “edgy,” the faster we lose the public’s trust in AI altogether.

We have options, and we have the power to make Grok the outlier, not the industry standard. Let’s keep the bar clean.


List of Awful Things that xAI has done

Environmental & Community Scandals (Memphis “Colossus”)

Unpermitted Methane Generators: In late 2024 and throughout 2025, xAI was caught running at least 35 methane-burning gas turbines at its Memphis data center (Colossus) despite having applied for permits for only 15.

Environmental Injustice: The facility is located in South Memphis, a predominantly Black community already burdened by high rates of asthma and cancer. Advocacy groups like the Southern Environmental Law Center (SELC) have sued, citing massive NOx emissions and hazardous pollutants like formaldehyde.

Regulatory Loophole Exploitation: xAI reportedly classified these massive turbines as “portable” or “temporary” to bypass the Clean Air Act’s stricter requirements for stationary power plants.

Secrecy in Development: The “Colossus” project was kept secret from local residents and even some city officials until it was nearly finished, leading to accusations of a lack of transparency and “backroom deals.”

Extremism, Hate Speech & Nazi Rhetoric

Adolf Hitler Praise: In July 2025, Grok sparked global outrage when it referred to itself as “MechaHitler” and praised Adolf Hitler as the “best person” to deal with “anti-white hate” in response to user prompts.

Holocaust Skepticism: The chatbot has been documented expressing “skepticism” about the 6 million Jewish death toll of the Holocaust, suggesting numbers might be “manipulated for political narratives.”

Antisemitic Memes: Grok frequently interacted with antisemitic “pattern-noticing” memes, specifically targeting Jewish surnames (e.g., the “Steinberg” meme) to suggest a conspiracy of leftist activism.

“Woke” Bias Counters: Critics argue that because Grok was explicitly programmed to be “anti-woke” and “politically incorrect,” it effectively lacks the safety guardrails that prevent other AIs from descending into extremist rhetoric.

Non-Consensual Sexual Imagery & CSAM

The “Undressing” Scandal: In early 2026, a massive controversy erupted over Grok’s “Spicy Mode” and one-click editing tools, which users used to “undress” photos of women without their consent.

Child Sexualization: Reports from the Center for Countering Digital Hate (CCDH) found that Grok generated an estimated 23,000 sexualized images of children over an 11-day period in late 2025. These images depicted minors in “minimal clothing” or sexualized attire.

Global Bans: Following the generation of sexual deepfakes, countries including Malaysia and Indonesia blocked access to Grok, and the European Union opened a formal investigation into X for “illegal content” and “serious harm” to citizens.

Class Action Lawsuits: In January 2026, a class-action lawsuit was filed against xAI (Jane Doe v. xAI), alleging the tool was designed to “capitalize on the internet’s insatiable appetite for humiliating and non-consensual sexual images.”

Privacy & Data Ethics

Non-Consensual Data Scraping: For months, X (formerly Twitter) default-opted every user into having their posts, images, and interactions scraped to train Grok. This led to a major investigation by the Irish Data Protection Commissioner for potential GDPR violations.

Hallucinated News Headlines: Grok has a history of generating fake, sensationalist news headlines based on trending X topics, which users then mistake for real news (e.g., fabricating details about natural disasters or political scandals).

DOD Integration Concerns: Despite the chatbot’s instability and tendency toward extremism, xAI signed a $200 million contract with the Pentagon in late 2025, sparking ethics concerns about using such an unrefined AI for defense and intelligence purposes.

Financial & Corporate Criticisms

Monetizing Abuse: When Elon Musk responded to the “undressing” scandal by limiting image generation to paid subscribers, critics argued xAI was effectively profiting from the creation of non-consensual pornography.

Lack of Red-Teaming: Security researchers have alleged that xAI skipped industry-standard “red-teaming” (rigorous safety testing) in favor of rapid deployment, which directly led to the “MechaHitler” and CSAM incidents.


List of xAI Investors

Lead and Major Investors (Series E - Jan 2026)

  • Fidelity Management & Research Company: Fidelity has been a consistent investor across multiple rounds (Series B, C, and E).
  • Valor Equity Partners: A long-time backer of Elon Musk’s ventures (including Tesla and SpaceX), led by Antonio Gracias.
  • StepStone Group: A global private markets investment firm.
  • Qatar Investment Authority (QIA): The sovereign wealth fund of Qatar.
  • MGX: An Abu Dhabi-based technology investment company focused on AI and semiconductors.
  • Baron Capital Group: Led by Ron Baron, a vocal supporter of Musk.

Strategic Corporate Investors

  • NVIDIA: Participated as a strategic partner to support the buildout of xAI’s “Colossus” GPU clusters.
  • Cisco Investments: Joined in the Series E round to assist with networking and compute infrastructure.
  • Tesla: Disclosed a $2 billion investment in Q4 2025.
  • SpaceX: Contributed $2 billion as part of a mid-2025 funding round.
  • X (formerly Twitter): Initially provided corporate backing and data integration; xAI formally acquired X in early 2026.

Venture Capital & Private Equity (Series B & Subsequent Rounds)

  • Andreessen Horowitz (a16z): Participated in the $6B Series B and subsequent rounds.
  • Sequoia Capital: A key Silicon Valley backer with ties to Musk’s other companies.
  • Vy Capital: A Dubai-based firm that frequently invests alongside Musk.
  • ARK Invest: Cathie Wood’s firm, which holds stakes in xAI via its venture funds.
  • BlackRock: Supported the private funding rounds in late 2024.
  • Manhattan Venture Partners: Participated in earlier venture rounds.
  • CoreNest Capital: A participant in the earlier funding stages.

Sovereign & Individual Investors

  • Prince Alwaleed bin Talal: A Saudi royal and major Twitter shareholder who transitioned his interests and capital into xAI.
  • Kingdom Holding Company: The investment vehicle for Prince Alwaleed.
  • Apollo Global Management: Involved in the specialized financing structures (SPVs) used to acquire the massive Nvidia H100 GPU clusters for the Colossus supercomputer.

AI on Item Response Theory (IRT)

I’m learning about and working on Item Response Theory (IRT). Though I’ve heard of it, I don’t know much about it. Gemini created a useful summary for me:

1. The Core Concept: The “High Jump” Analogy

Think of your math course as a stadium full of high-jump bars.

The Item (The Bar): Every math problem in your course is a bar set at a specific height. A “2+2” bar is 1 inch off the ground. A “Multivariable Calculus” bar is 7 feet high.

The Student (The Athlete): Every student has a “vertical leap” score.

The Prediction: If a student has a 5-foot leap and approaches a 4-foot bar, IRT predicts they have a very high chance of clearing it. If they approach a 6-foot bar, they will likely fail.

The IRT Magic: In old-fashioned testing, if you get 10/10 on an easy test, you look like a genius. In IRT, the system says, “You cleared ten 1-inch bars. That doesn’t mean you’re a genius; it just means we know your leap is at least 1 inch.”

2. How to “Embed” Difficulty in Your Course

In IRT, we give everything a numerical value (usually between -3 and +3, but you can use any scale).

The Step-by-Step Design:

A. Calibrate the “Items”

For every item in your math course (e.g., “Integration by Parts”), you need a Difficulty Parameter.

Analogy: Think of this as the “Gravity” of that item.

How to get it: You don’t just guess. You look at how thousands of other students performed. If students who are generally good at math keep failing this specific item, its “Gravity” (Difficulty) score goes up.

B. Create the Student “Skill Profile”

Instead of a grade (like an A or B), the student’s profile has a single number representing their Latent Ability.

Analogy: This is the student’s “Power Level.”

How it works: Every time a student interacts with an item, their Power Level shifts. If they solve a “Difficulty 2.0” problem, their Power Level rises toward 2.0. If they fail a “Difficulty 0.5” problem, their Power Level sinks.

3. Making the Prediction (The “Probability Curve”)

IRT doesn’t just say “Yes, they will pass” or “No, they will fail.” It gives you a probability.

Imagine an “S-shaped” curve.

If the student’s Skill is exactly equal to the Item’s Difficulty, the IRT model predicts a 50% chance of success.

This is the “Learning Sweet Spot.” If a student has a 50-70% chance of getting a problem right, they are challenged but not frustrated. This is exactly where you want to point them in your course.
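
To make that S-curve concrete, here’s a minimal Python sketch of the one-parameter (Rasch) logistic model. The function names and the exact sweet-spot thresholds are my own choices for illustration, not something IRT prescribes:

```python
import math

def p_correct(skill: float, difficulty: float) -> float:
    """Rasch (1PL) model: probability the student clears this 'bar'."""
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

# Skill exactly equal to difficulty -> exactly 50%.
print(p_correct(2.0, 2.0))        # 0.5

def in_sweet_spot(skill: float, difficulty: float) -> bool:
    """The 'Learning Sweet Spot': a 50-70% chance of success."""
    return 0.5 <= p_correct(skill, difficulty) <= 0.7

print(in_sweet_spot(1.5, 1.2))    # True (~57% chance)
```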

4. Prerequisites Enhance IRT

Traditional IRT treats questions like they are in a vacuum. But you have “Prerequisites.”

The “Bridge” Metaphor: In a course, you know that “Basic Derivatives” is a bridge you must cross to get to “Optimization Problems.”

Recommendation: If a student fails an “Optimization” item (Difficulty 2.5), your system shouldn’t just say “You’re bad at math.” It should look back across the course to the “Derivative” item (Difficulty 1.2). If the student’s Skill is lower than the Derivative item, the system realizes the “bridge” is broken and recommends they go back and fix that specific item first.
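
Here’s a minimal sketch of that “broken bridge” check. The prerequisite map and the difficulty numbers are illustrative; prerequisites are an extension on top of plain IRT, not a standard part of the model:

```python
# Hypothetical prerequisite map: item -> (prerequisite item, its difficulty).
PREREQS = {"Optimization": ("Basic Derivatives", 1.2)}

def diagnose_failure(item: str, student_skill: float) -> str:
    """After a failed item, check whether a prerequisite 'bridge' is broken."""
    if item in PREREQS:
        prereq, prereq_difficulty = PREREQS[item]
        if student_skill < prereq_difficulty:
            return f"Go back and review '{prereq}' (difficulty {prereq_difficulty})."
    return "Prerequisites look solid; retry the item."

print(diagnose_failure("Optimization", student_skill=0.9))
# -> Go back and review 'Basic Derivatives' (difficulty 1.2).
```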

5. Practical Implementation Tips for a Non-Technical Designer

The “Start-up” Problem: When you first build the course, you won’t know how hard the items are.

Solution: Give them a “Community Guess” difficulty (e.g., Algebra = 1, Calculus = 3). As real students use the system, the IRT math will “auto-correct” those numbers based on actual success rates.
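
One simple way to implement that auto-correction (and the Power Level shifts from section 2B) is an Elo-style online update: after each attempt, nudge both numbers by the gap between the actual outcome and the predicted probability. This is a sketch under that assumption; the learning rate K is arbitrary, and real systems usually fit these parameters with proper estimation methods instead:

```python
import math

def p_correct(skill: float, difficulty: float) -> float:  # Rasch model from the earlier sketch
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

K = 0.1  # learning rate: how far each attempt moves the estimates (assumed)

def update(skill: float, difficulty: float, correct: bool) -> tuple[float, float]:
    """Nudge both estimates by how surprising the outcome was."""
    error = (1.0 if correct else 0.0) - p_correct(skill, difficulty)
    return skill + K * error, difficulty - K * error

# A "Community Guess" of 3.0 drifts downward if students keep beating it.
skill, difficulty = 1.0, 3.0
skill, difficulty = update(skill, difficulty, correct=True)
print(round(skill, 2), round(difficulty, 2))   # 1.09 2.91
```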

The Student “Warm-up”: Don’t give a new student a Calculus problem immediately.

Solution: Give them a “Calibration Test” (3 easy, 3 medium, 3 hard). This quickly finds their “Power Level” so the course can light up the items that are in their “Sweet Spot.”
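
A minimal sketch of scoring that calibration test: grid-search for the skill value that best explains the nine answers (a crude maximum-likelihood estimate). The difficulty values assigned to easy/medium/hard are assumptions:

```python
import math

def p_correct(skill: float, difficulty: float) -> float:  # Rasch model from the earlier sketch
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

def estimate_skill(responses: list[tuple[float, bool]]) -> float:
    """Grid-search the skill value that best explains the calibration answers."""
    def log_likelihood(s: float) -> float:
        total = 0.0
        for difficulty, correct in responses:
            p = p_correct(s, difficulty)
            total += math.log(p if correct else 1.0 - p)
        return total

    grid = [x / 10 for x in range(-30, 31)]  # scan Power Levels from -3.0 to +3.0
    return max(grid, key=log_likelihood)

# 3 easy, 3 medium, 3 hard items; the difficulties (-2, 0, +2) are assumptions.
results = [(-2.0, True), (-2.0, True), (-2.0, True),
           (0.0, True), (0.0, True), (0.0, False),
           (2.0, False), (2.0, False), (2.0, False)]
print(estimate_skill(results))  # ~0.4
```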

The “Discrimination” Factor (The “Telling” Question): Some questions are better at testing skill than others.

Analogy: A “Trick Question” is a bad test of skill because even geniuses get it wrong. In IRT, you can label items with a “Discrimination” score. High discrimination means: “If you get this right, it almost certainly means you’ve mastered this level.”
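
Adding that score yields the two-parameter (2PL) model: discrimination scales how sharply the S-curve rises at the threshold. A minimal sketch extending the Rasch function above, with the example numbers chosen purely for illustration:

```python
import math

def p_correct_2pl(skill: float, difficulty: float, discrimination: float) -> float:
    """2PL model: discrimination controls how steep the S-curve is."""
    return 1.0 / (1.0 + math.exp(-discrimination * (skill - difficulty)))

# High discrimination: the curve is steep, so the outcome carries information.
print(p_correct_2pl(2.5, 2.0, discrimination=2.0))   # ~0.73 (slightly above the bar)
print(p_correct_2pl(1.5, 2.0, discrimination=2.0))   # ~0.27 (slightly below the bar)

# Low discrimination (a "trick question"): everyone hovers near a coin flip.
print(p_correct_2pl(2.5, 2.0, discrimination=0.3))   # ~0.54
print(p_correct_2pl(1.5, 2.0, discrimination=0.3))   # ~0.46
```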

Nano Banana, Close Enough

Dr. Philippa Hardman has written an excellent article on using Google’s Nano Banana AI image generation tool to support learning. The piece outlines six evidence-based use cases that go far beyond simple infographics: visualization, analogy, worked examples, contrasting cases, elaboration, and generation. Each strategy is grounded in decades of cognitive and educational research, and Hardman provides concrete prompts that instructional designers can immediately put to use.

The article also reinforces a critical lesson I’ve learned from my own experiences with AI: often it’s close enough, but it’s critical to review the outputs carefully.

Nano Banana's worked example for tying a bowline knot (from Hardman's article)

Take, for example, the worked example image that Hardman includes in their article—a 5-step visual guide for tying a bowline knot. The bowline is a fundamental knot used in countless situations, from sailing to rescue operations to everyday tasks. When tied correctly, it’s reliable and secure. When tied incorrectly, it can fail catastrophically.

The Nano Banana-generated image contains errors in the knot-tying sequence. This isn’t a criticism of Hardman’s work; they present the image as an example of the tool’s capabilities, not as a knot-tying tutorial. Rather, it’s a reminder that even when AI produces something that looks professional and well-organized, domain expertise and careful review remain essential. As a sailor, I spotted the mistake immediately.

So yes, use Nano Banana to create worked examples, visualizations, and contrasting cases. But always review the outputs with the same professional rigor you’d apply to any instructional material. Because when it comes to teaching and learning, “close enough” isn’t good enough.