NASA & IBM Release Surya: First Open-Source AI Model for Heliophysics
NASA and IBM have released Surya on Hugging Face, the world’s first open-source AI foundation model for heliophysics. Surya is a 366M-parameter transformer model, trained on nine years (~218 TB) of solar observational data from NASA’s Solar Dynamics Observatory (SDO). The training data covers 8 Atmospheric Imaging Assembly (AIA) channels and 5 Helioseismic and Magnetic Imager (HMI) products—providing a rich multi-instrument view of the Sun’s activity.
Why It Matters:
Surya is not just another foundation model; it is the first AI built specifically to decode the Sun at scale. Unlike general-purpose LLMs, it is trained on nearly a decade of multi-instrument solar data, giving scientists a tool that can spot patterns humans would miss and run forecasts faster than physics-only simulations. As society's dependence on satellites, GPS, aviation, and power grids grows, space weather forecasting is moving from "nice-to-have science" to "critical infrastructure defense." It also sets a template for domain-specific AI: if heliophysics can benefit from its own foundation model, climate, agriculture, and planetary defense may be next.
NIST AI Risk Management Framework Playbook
This week, while working on a customer project, questions came up about the risks of generative AI and how to build a framework within the organization to address the key areas. That sent me back to the NIST AI Risk Management Framework (AI RMF) Playbook, which I had reviewed a few months earlier and which struck me then as a vital resource for organizations aiming to develop, deploy, and manage AI systems responsibly.
Although the discussion started with this specific customer scenario, we held a few working sessions on the Playbook during the week and concluded that it provides actionable guidance for achieving the outcomes outlined in the AI RMF Core, which is organized around four functions: Govern, Map, Measure, and Manage. Here is what I took away.
Source: NIST AI RMF Playbook
Map: Establishing Context for AI Risk Identification
The Map function is foundational, enabling organizations to understand the context in which an AI system operates and identify associated risks. By mapping the AI system's purpose, usage, and stakeholders, organizations can pinpoint potential risks early in the lifecycle. This involves documenting system objectives, data sources, and stakeholder perspectives to ensure transparency and alignment with organizational goals. The Map function ensures that risks are framed within the specific context of the AI system, setting the stage for effective measurement and management.
Key Role: Provides a comprehensive understanding of the AI system’s context, enabling proactive risk identification and informing subsequent functions. Without this step, organizations may overlook critical risks stemming from system design or deployment settings.
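To make the Map step concrete, here is a minimal sketch of a system-context record in Python. The fields mirror what the paragraph above says to document (purpose, data sources, stakeholders, risks), but the structure and field names are my own illustration, not something the Playbook prescribes.

```python
from dataclasses import dataclass, field

# Illustrative context record for the Map function; the field names are
# assumptions for this sketch, not NIST-mandated terminology.
@dataclass
class AISystemContext:
    name: str
    purpose: str
    data_sources: list = field(default_factory=list)
    stakeholders: list = field(default_factory=list)
    identified_risks: list = field(default_factory=list)

    def add_risk(self, risk: str) -> None:
        # Risks are appended as they surface while mapping the context.
        self.identified_risks.append(risk)

# Hypothetical example system for illustration only.
ctx = AISystemContext(
    name="invoice-triage-llm",
    purpose="Route inbound invoices to the correct approval queue",
    data_sources=["ERP exports", "email attachments"],
    stakeholders=["AP clerks", "vendors", "internal audit"],
)
ctx.add_risk("Misrouted invoices delay vendor payment")
print(ctx.identified_risks)
```

Even a record this simple forces the questions the Map function cares about: who is affected, where the data comes from, and what could go wrong.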
Measure: Assessing and Monitoring AI Risks
The Measure function employs quantitative, qualitative, or mixed-method tools to analyze, assess, and monitor AI risks and their impacts. It builds on the context established in the Map function by evaluating system performance, trustworthiness, and potential biases. Regular testing before and after deployment ensures that AI systems align with trustworthy characteristics such as fairness, reliability, and security. By tracking metrics and documenting outcomes, organizations can maintain accountability and make data-driven decisions to mitigate risks.
Key Role: Enables organizations to quantify and monitor risks, ensuring systems remain trustworthy and compliant with organizational and regulatory standards throughout their lifecycle.
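As one example of a quantitative "Measure" check, here is a sketch of a demographic parity gap: the difference in positive-outcome rates between two groups. The metric is a standard fairness measure, but the toy data, group labels, and any alerting threshold are assumptions for illustration.

```python
# Minimal fairness metric sketch for the Measure function.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap between the two groups' selection rates; 0.0 means parity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Toy outcome data: 1 = approved, 0 = denied.
group_a = [1, 1, 0, 1, 0]   # 60% approved
group_b = [1, 0, 0, 0, 1]   # 40% approved

gap = demographic_parity_diff(group_a, group_b)
print(f"parity gap: {gap:.2f}")
```

A team would track a metric like this before and after deployment, alongside reliability and security tests, and document each run so decisions stay data-driven.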
Manage: Mitigating and Responding to AI Risks
The Manage function focuses on allocating resources to address identified and measured risks, implementing plans for incident response, recovery, and continuous improvement. It leverages insights from the Map and Measure functions to prioritize risks and deploy mitigation strategies, such as regular monitoring, stakeholder feedback integration, and system updates. This function ensures that organizations can respond to incidents, reduce negative impacts, and enhance system resilience over time.
Key Role: Translates risk insights into actionable strategies, fostering resilience and accountability while minimizing system failures and societal impacts.
Key Takeaways for Organizational Implementation:
The Playbook is not a rigid checklist but a voluntary set of suggestions. Organizations should tailor its recommendations to their specific industry, use case, and risk tolerance, selecting only the actions that fit their needs.
Start with the Map function to establish a clear context for AI systems. Document system objectives, stakeholder perspectives, and data provenance to identify risks early and ensure alignment with organizational goals.
Use the Measure function to conduct regular testing and track metrics for trustworthiness, such as fairness and reliability. Incorporate standard software testing methods and stakeholder feedback to maintain system integrity.
Leverage the Manage function to create incident response, monitoring, and continuous improvement plans. Engage diverse stakeholders and document decisions to enhance transparency and accountability.
Integrate AI RMF functions into organizational policies and training programs. Senior leadership commitment and clear role assignments are critical to embedding a culture of responsible AI development.
Recipe for Organizational Implementation: To operationalize the NIST AI RMF Playbook:
Step 1: Familiarize and Assess: Study the AI RMF and Playbook to understand its functions. Identify all AI systems within your organization and assess their risk profiles.
Step 2: Map Risks: Document the context, purpose, and stakeholders for each AI system. Identify potential risks, including biases and societal impacts, using stakeholder input.
Step 3: Measure Performance: Implement testing protocols to evaluate system trustworthiness. Use metrics to monitor fairness, reliability, and security, and document results.
Step 4: Manage Risks: Develop mitigation strategies, including incident response and monitoring plans. Engage stakeholders regularly and update systems based on feedback.
Step 5: Embed Governance: Integrate AI RMF practices into organizational policies, ensuring senior-level support and ongoing training for AI actors.
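The five steps above can be sketched as a toy risk register that moves a single risk through map, measure, and manage. The severity scale, statuses, and metric names here are illustrative assumptions, not NIST requirements.

```python
# Toy risk register walking one risk through the recipe's steps.
register = []

def map_risk(system, description, severity):
    """Step 2: record a risk together with its system context."""
    entry = {"system": system, "risk": description,
             "severity": severity, "status": "open", "metrics": []}
    register.append(entry)
    return entry

def measure(entry, metric, value):
    """Step 3: attach a measured metric to the risk."""
    entry["metrics"].append({"metric": metric, "value": value})

def manage(entry, action):
    """Step 4: record the mitigation and update the status."""
    entry["mitigation"] = action
    entry["status"] = "mitigated"

# Hypothetical example: a support chatbot that invents refund terms.
r = map_risk("support-chatbot", "Hallucinated refund policy", severity=4)
measure(r, "groundedness score", 0.72)
manage(r, "Add retrieval grounding plus human review for refund answers")
print(r["status"])
```

In practice this register would live in a governance tool with owners, review dates, and sign-offs (Step 5), but the data flow is the same: context in, measurements attached, mitigations recorded.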
By leveraging the Govern, Map, Measure, and Manage functions, organizations can build trustworthy AI systems that balance innovation with accountability, ensuring responsible deployment in alignment with their goals and societal values.
Gen AI Maturity Framework:
The framework is live at GenAIMaturity.Net, where you can try the maturity assessments and browse several resources. The entire portal is vibe-coded, and content is being reviewed and added frequently.
Cohere launched Command A Reasoning, a powerful 111B-parameter open-weight model designed for enterprise-grade reasoning. It supports tool integration, handles multilingual tasks (23 languages), and features a 256K token context window, making it well suited to long workflows and agent-based use. The model can toggle a "reasoning" mode to trade precision against speed, and runs effectively on a single H100 or A100 GPU.
Why It Matters: It’s built to think and act like an enterprise assistant. By offering reasoning, tool execution, and massive context length in one flexible package, Cohere lets companies consolidate AI workflows that used to require multiple models. It simplifies deployment, cuts costs, and scales automation without losing depth or accuracy. For businesses running AI internally, Command A Reasoning is a rare blend of power, efficiency, and control.
The Cloud: the backbone of the AI revolution
Delivering the Power of Frontier Models: Oracle's Collaboration with Google. source
Think SMART: How to Optimize AI Factory Inference Performance source
Anthropic has teamed up with academic experts Prof. Joseph Feller from University College Cork and Prof. Rick Dakan from Ringling College to introduce an AI fluency course. This course provides practical skills for effective, efficient, ethical, and safe AI interaction. It offers valuable content for everyone, whether you're new to Claude or an experienced AI user.
Potential of AI:
Tom Brown co-founded Anthropic after contributing to the development of GPT-3 at OpenAI. As a self-taught engineer, he improved from earning a B-minus in linear algebra to becoming a leading figure in AI's scaling advances.
Dubai Future Foundation, under the guidance of His Highness Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, has introduced the world’s first Human–Machine Collaboration (HMC) icon system. This visual framework allows creators to declare the level of AI involvement, ranging from “All Human” to “All Machine,” and identify specific content stages where AI contributed, such as ideation, data analysis, writing, visuals, and more.
Implementation is mandatory for all Dubai government entities, while creators worldwide are encouraged to adopt the icons voluntarily for transparency and accountability.
My Take
The HMC icons are more than labels; they’re a trust layer. As GenAI becomes ubiquitous in content creation, everyone needs clarity, not catchy slogans. These icons deliver that clarity: simple, standardized, and scalable.
Therefore, AI Tech Circle will begin adopting HMC icons across this newsletter. I am committed to declaring human vs. AI involvement explicitly.
Most people still equate Generative AI with chatbots that answer questions. But its real business value is emerging in less visible, workflow-transforming roles:
AI can unify fragmented data across PDFs, intranets, and SaaS tools, turning unstructured knowledge into decision-ready summaries. That’s a CFO’s dashboard upgrade, not a chatbot.
Agentic AI systems now act as coordinators: filing expense reports, updating CRMs, reconciling invoices, or scheduling campaigns. This is back-office automation with human-like flexibility.
AI drafts product sketches, generates regulatory documents, or simulates scenarios for engineering teams. These aren’t conversations; they’re accelerators for innovation pipelines.
LLMs monitor transactions, contracts, or communications in real time, flagging anomalies before auditors do. This reduces exposure in ways old rule-based systems never could.
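To show the monitoring pattern in the last point, here is a minimal numeric screen that flags transactions deviating sharply from the running history. A real deployment would pair a statistical screen like this with an LLM that reads the surrounding contract or email context; this toy version, with made-up payment amounts, shows only the numeric part.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` std devs from the mean."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Toy payment history with one obvious outlier.
payments = [120, 135, 110, 128, 9500, 131]
print(flag_anomalies(payments, threshold=2.0))
```

Rule-based systems hard-code screens like this; the shift described above is letting models also read the unstructured context around each flagged item before it reaches an auditor.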
The Opportunity...
Podcast:
This week's Open Tech Talks episode 162 is "The Importance of Data Sovereignty in AI Workflows with Giorgio Natili". He is Vice President and Head of Engineering at Opaque Systems.
Airi: a self-hosted, user-owned AI companion ("Grok Companion") that the project describes as a container for waifu "cyber souls," bringing virtual characters into our world.
Sim is an open-source AI agent workflow builder. Sim's interface is a lightweight, intuitive way to rapidly build and deploy LLM-powered workflows that connect with your favorite tools.
The Investment in AI...
TinyFish, an AI startup, has raised $47 million in Series A funding to expand its platform for creating and deploying AI-powered web agents.
Firecrawl has secured a $14.5 million Series A funding round. It is a developer platform that unlocks web data for developers and AI agents.
That’s it for this week - thanks for reading!
Reply with your thoughts or favorite section.
Found it useful? Share it with a friend or colleague to grow the AI circle.
Until next Saturday,
Kashif
The opinions expressed here are solely my conjecture based on experience, practice, and observation. They do not represent the thoughts, intentions, plans, or strategies of my current or previous employers or their clients/customers. The objective of this newsletter is to share and learn with the community.
You are receiving this because you signed up for the AI Tech Circle newsletter or Open Tech Talks. If you'd like to stop receiving all emails, click here. Unsubscribe · Preferences
AI Tech Circle
Kashif Manzoor
Learn something new every Saturday about #AI #ML #DataScience #Cloud and #Tech with the weekly newsletter. Join 278+ AI enthusiasts!