
Microsoft Fabric: What's new and what's next?

This repo is a companion to this session at Microsoft AI Tour, a worldwide tour of events.

Learn more about Microsoft AI Tour on the official website.

Session Description

The analytics landscape for data-focused application developers is changing fast, with AI, real-time analytics, exponentially growing data sources, and new tools appearing daily. In Microsoft Fabric, data, AI, BI, and application development professionals have an integrated platform that unifies data sources, offers a simplified yet feature-rich user experience, and has security and governance built in. Join us to learn about the latest announcements from Build and see what the future holds.

Learning Outcomes

  • Understand the value of Microsoft Fabric for your organization
  • See how Microsoft Fabric can empower both business users and all data roles
  • Discover the latest product updates and announcements

Technology Used

  • Microsoft Fabric
  • Azure SQL
  • Copilot
  • Teams

Additional Resources and Continued Learning

You can find additional resources, including the slides of the presentation, here.

If you are presenting this talk, you can find the session delivery resources here.

Content Owners

Responsible AI

Microsoft is committed to helping our customers use our AI products responsibly, sharing our learnings, and building trust-based partnerships through tools like Transparency Notes and Impact Assessments. Many of these resources can be found at https://aka.ms/RAI. Microsoft’s approach to responsible AI is grounded in our AI principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Large-scale natural language, image, and speech models - like the ones used in this sample - can potentially behave in ways that are unfair, unreliable, or offensive, in turn causing harms. Please consult the Azure OpenAI service Transparency note to be informed about risks and limitations.

The recommended approach to mitigating these risks is to include a safety system in your architecture that can detect and prevent harmful behavior. Azure AI Content Safety provides an independent layer of protection, able to detect harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes text and image APIs that allow you to detect material that is harmful. Within Azure AI Studio, the Content Safety service allows you to view, explore and try out sample code for detecting harmful content across different modalities. The following quickstart documentation guides you through making requests to the service.
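As a rough illustration (not part of this repo), the sketch below shows what a text moderation check with the azure-ai-contentsafety Python package might look like. The environment variable names and the sample text are assumptions; you would supply the endpoint and key from your own Content Safety resource.

```python
# Minimal sketch: screening a piece of text with Azure AI Content Safety.
# Assumes the azure-ai-contentsafety package is installed and that
# CONTENT_SAFETY_ENDPOINT / CONTENT_SAFETY_KEY point at your own resource.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Analyze a model response before showing it to the user.
response = client.analyze_text(
    AnalyzeTextOptions(text="Example model output to screen.")
)

# Each category (Hate, SelfHarm, Sexual, Violence) comes back with a severity score;
# block or rewrite the output when any severity crosses your chosen threshold.
for item in response.categories_analysis:
    print(item.category, item.severity)
```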

Another aspect to take into account is overall application performance. With multi-modal and multi-model applications, we consider performance to mean that the system performs as you and your users expect, including not generating harmful outputs. It's important to assess the performance of your overall application using Performance and Quality evaluators as well as Risk and Safety evaluators. You can also create and evaluate with custom evaluators, as sketched below.
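To illustrate the custom evaluator option, a custom evaluator in the Azure AI Evaluation SDK can be as simple as a callable that returns a dictionary of scores. The example below is a hypothetical length-based check, not something shipped with this repo, and the field name "response" assumes the common query/response dataset schema.

```python
# Minimal sketch of a custom evaluator: any callable that accepts fields from a
# test-data row (e.g. "response") and returns a dict of metrics can be plugged into
# the Azure AI Evaluation SDK alongside the built-in evaluators.
class ResponseLengthEvaluator:
    """Hypothetical evaluator that flags overly long answers."""

    def __init__(self, max_words: int = 150):
        self.max_words = max_words

    def __call__(self, *, response: str, **kwargs) -> dict:
        word_count = len(response.split())
        return {
            "word_count": word_count,
            "within_limit": float(word_count <= self.max_words),
        }
```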

You can evaluate your AI application in your development environment using the Azure AI Evaluation SDK. Given either a test dataset or a target, your generative AI application's generations are quantitatively measured with built-in evaluators or custom evaluators of your choice. To get started with the Azure AI Evaluation SDK, you can follow the quickstart guide. Once you execute an evaluation run, you can visualize the results in Azure AI Studio.
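For orientation only, a local evaluation run with the azure-ai-evaluation package might look like the sketch below. The dataset path, deployment name, environment variable names, and dataset schema are all assumptions you would replace with your own.

```python
# Minimal sketch: running built-in evaluators over a JSONL test dataset with the
# Azure AI Evaluation SDK. Endpoint and deployment values are placeholders.
import os

from azure.ai.evaluation import evaluate, RelevanceEvaluator, CoherenceEvaluator

# Model configuration for the AI-assisted quality evaluators (assumed values).
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o",  # assumed deployment name
}

result = evaluate(
    data="eval_dataset.jsonl",  # rows with "query" and "response" fields (assumed schema)
    evaluators={
        "relevance": RelevanceEvaluator(model_config),
        "coherence": CoherenceEvaluator(model_config),
    },
)

# Aggregate metrics across the dataset; per-row results are also returned.
print(result["metrics"])
```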