A big company was hacked by an AI
To be or not to be an AI wrapper?
Greetings!
Welcome to The Menu Magic - Finance & AI weekly newsletter
In today’s email:
A big company was hacked by an AI
To be or not to be an AI wrapper?
Meme of the week
A big company was hacked by an AI
Dear friend,
In a world where convenience often trumps caution, a new fraud has emerged that may make us all reconsider our digital security postures. Picture this: Retool, a reputable platform championed by industry giants, recently faced a sophisticated attack jeopardizing their cloud customer accounts. The culprit? A blend of spearphishing and cunning use of generative AI. Let's unpack this cautionary tale.
Retool, for the uninitiated, is not just another software company. Its platform enables titans such as Amazon and Lyft to seamlessly create applications that drive their businesses. These applications, and the sensitive data they manage, are typically safeguarded by secure single sign-on systems provided by services like Okta, often buttressed by multi-factor authentication (MFA).
But how did this digital fortress fall? It began with Retool announcing a migration to Okta. The attackers, seizing the opportunity, crafted a deceptive lookalike page to spearphish Retool employees. Despite most employees successfully sidestepping this trap, it took only one to falter, providing their credentials.
The attackers' ingenuity didn't stop there. To bypass MFA, they deployed an AI-generated deepfake of the employee's voice to deceive Retool's IT support into issuing an additional MFA code, granting them access to the employee's Okta account. With this breach, the attackers roamed freely, compromising GSuite sessions and, alarmingly, internal Retool systems.
This cascade of breaches raises a critical question: is Google Authenticator's cloud MFA backup to blame? By letting users sync their MFA seeds to the cloud, the feature prioritized convenience and inadvertently collapsed two-factor authentication into a single point of failure: once the attackers controlled the employee's Google account, they controlled the one-time-password seeds as well.
Recent attacks on other enterprises echo the same lesson: scammers are no longer targeting only the traditionally vulnerable; they are going after high-stakes targets. It reminds us that sometimes a dash of friction, a slight inconvenience, is indeed a safeguard.
The cybersecurity landscape is slowly but surely shifting towards FIDO logins over one-time passwords, recognizing that security must evolve. Had Retool or Okta incorporated a system that analyzed the reputation of a device or its user's behavior, the abnormal access patterns could have been a red flag, potentially averting this breach.
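To make the idea concrete, here is a toy sketch of the kind of device- and behavior-reputation check described above. The signals, weights, and threshold are invented for illustration; real systems (Okta's, for instance) use far richer telemetry.

```python
# Toy risk scoring for a login attempt. Signals and weights are
# illustrative assumptions, not any vendor's actual logic.

def login_risk_score(known_device: bool,
                     usual_location: bool,
                     usual_hours: bool) -> int:
    """Sum simple risk signals; a higher score is more suspicious."""
    score = 0
    if not known_device:
        score += 2   # first time seeing this device fingerprint
    if not usual_location:
        score += 2   # login from an unusual geography
    if not usual_hours:
        score += 1   # login outside the user's normal hours
    return score

def requires_step_up(score: int, threshold: int = 3) -> bool:
    """Above the threshold, demand stronger verification (e.g. FIDO key)."""
    return score >= threshold

# A login from a new device in an unusual location is flagged for
# step-up verification rather than waved through.
print(requires_step_up(login_risk_score(False, False, True)))
```

Even a crude check like this would have treated "valid credentials plus valid MFA code, but from a device never seen before" as a red flag rather than a green light.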
The timeless wisdom in cybersecurity remains unchallenged: The weakest link is often not the technology but the people and processes that interact with it.
In light of this, what steps are we taking, both personally and within our organizations, to not be the weakest link? As we march towards a digital future, let's tread wisely, balancing the scales between security and convenience. It's not just about having strong chains; it's about ensuring there are no weak links.
Considering the sophistication of these attacks, how might we further inoculate our digital lives against such vulnerabilities? What could be the role of individual vigilance in the collective security of our interconnected digital ecosystem?
To be or not to be an AI wrapper?
Dear friend,
Let’s dive into a spirited debate that’s been stirring up the tech community: the efficacy and necessity of “GPT wrappers” in AI-powered products. Since 2019, the landscape of AI has evolved with whirlwind velocity, and with it, a pivotal question has emerged—what’s truly crucial for production in AI applications?
First and foremost, the user experience reigns supreme. End-users interact with the product's interface; they don't concern themselves with the gears grinding behind the screen. Whether you’re using GPT-4 or an in-house model, users remain indifferent; their allegiance lies with the seamless experience you provide.
Here's a reality check: unless your product is fundamentally unviable without a custom model, training your own AI models isn't the secret weapon you think it is. If GPT-4 serves the same purpose, your perceived competitive edge is, regrettably, a mirage.
For businesses considering the deployment of large language models (LLMs), the pivotal considerations are clear-cut:
1. Can the model deliver the speed and output quality your users demand?
2. What’s the financial threshold for achieving the desired output?
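The second question is easy to answer with a back-of-envelope model. The sketch below estimates monthly spend for a hosted-API model from token volumes and per-1K-token prices; the prices and traffic figures used in the example are placeholders, not real vendor pricing.

```python
# Back-of-envelope cost model for an LLM-backed feature.
# All prices below are illustrative placeholders.

def monthly_api_cost(requests_per_day: int,
                     input_tokens: int,
                     output_tokens: int,
                     price_in_per_1k: float,
                     price_out_per_1k: float) -> float:
    """Estimated monthly spend (30 days) for a hosted-API model."""
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30

# Example: 10,000 requests/day, 500 tokens in, 300 tokens out,
# at assumed rates of $0.03/1K input and $0.06/1K output tokens.
cost = monthly_api_cost(10_000, 500, 300, 0.03, 0.06)
print(f"${cost:,.0f}/month")
```

Run this with your own traffic and the current price sheet, compare the result against your margins, and the "can we afford it?" question stops being a matter of opinion.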
These considerations bring us to a crossroads, where cost and time investment weigh heavily. So, how should you navigate this decision-making process?
Here’s a practical guide:
When Searching for Product-Market Fit (PMF):
- Leverage closed APIs or hosted pre-trained open-source models that offer immediate, high-quality outputs, propelling you towards PMF with alacrity.
- If costs are unsustainable and you possess ample data, fine-tuning becomes logical, albeit a longer path to PMF. Use out-of-the-box APIs to perfect the user experience, and only then pivot to fine-tuning.
- The notable exception: when closed models fall short on complex or intricate tasks that are beyond the reach of GPT-4 or even a series of prompted GPTs.
Post-PMF:
- After establishing your product vision, the quest shifts towards optimizing cost, latency, and quality. There's no universal solution here.
- If GPT-3.5/4 is adequate but financially or technically inefficient, consider training your own model.
- Conversely, if GPT-3.5/4 aligns with your performance and economic requirements, stick with it.
- Should closed APIs prove inadequate for your task, training a custom model becomes inevitable.
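The post-PMF branches above can be condensed into a few lines of logic. This is a sketch of the decision flow as stated, with the three inputs reduced to booleans for clarity; the strategy labels are mine, not an established taxonomy.

```python
# A compact encoding of the post-PMF decision guide above.
# Inputs and labels are simplified for illustration.

def choose_model_strategy(api_meets_quality: bool,
                          api_cost_ok: bool,
                          api_latency_ok: bool) -> str:
    if not api_meets_quality:
        # Closed APIs fall short on the task itself.
        return "train a custom model"
    if api_cost_ok and api_latency_ok:
        # The closed API meets performance and economic requirements.
        return "stay on the closed API"
    # Adequate quality, but financially or technically inefficient.
    return "consider training or fine-tuning your own model"

print(choose_model_strategy(True, True, True))
```

The point of writing it down is not the code itself but the discipline: each branch forces you to name which requirement the closed API is actually failing before you commit to the expensive path.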
Considerations for Reflection:
- Assess your profit margins—will using GPT-4 deplete them? If not, the ease of updating services by simply tweaking a prompt may far outweigh the benefits of fine-tuning.
- Evaluate the potential superiority of a trained model over GPT-4, especially in comparison to competitors using closed APIs.
- Anticipate the future—reductions in cost and the arrival of more advanced models like GPT-5. The trend towards affordability and efficiency is inexorable.
A combination of closed APIs and in-house models often serves at-scale use cases best. While the allure of training and fine-tuning is strong, we must recognize when it’s an unnecessary indulgence rather than a necessity.
In closing, remember that the heart of your product is the experience it delivers. The model is merely the vessel. As we look to the horizon, where might our focus on user experience and the simplicity of AI integration take us? How can we innovate while maintaining a lean, user-centric approach in an increasingly complex AI ecosystem?
P.S. In your quest for the perfect user experience, how do you balance the allure of cutting-edge AI with the practicalities of business needs? And as AI continues to evolve, how will we discern when to adopt, adapt, or altogether forgo the latest advancements?
Catch up soon, and let me know what you think!
Meme of the week

The wolf of AI
I'd love to hear your feedback on today's newsletter! Is there a specific type of content you'd like to see more of in the future? Since I'll be releasing a new edition each week, I welcome any suggestions or requests you may have. Looking forward to hearing your thoughts!
The Menu Magic is written by Francisco Cordoba Otalora, an AI entrepreneur living in London.
Share this newsletter with a friend