Video Transcript:
[Music]

Good morning, everyone, and thank you for joining us today as we embark on this exploration of the European Union Artificial Intelligence Act, also known as the AI Act. I'm Arnaud Latil from Sorbonne University and the Sorbonne Center for AI, and I have the great pleasure of organizing and presenting this webinar together with my colleague Mario Odak from Arwa University, who will moderate the next webinar on the 17th of December. We are standing today at the heart of the European Commission, in DG CONNECT, which hosts the AI Office, and we have the great pleasure of welcoming members of the AI Office with us today. Kilian Gross, welcome: you are the head of unit at DG CONNECT Unit A2, which deals with AI regulation and compliance policy, so you are in charge of the implementation of the AI Act. Irina Orssich, welcome: you are the head of sector for governance and regulatory matters, and you have been working on AI regulation for a long time. And Laura Jugel, welcome: you are a legal and policy officer in Kilian's unit, and you work closely with EU member states. All of you were closely involved in the preparation and negotiation of the AI Act, so thank you for being with us today.

This webinar is part of the AI Pact. Just a few words about the AI Pact: it is an initiative launched by the Commission last May in order to interact directly with stakeholders regarding the AI Act, which entered into force on the 1st of August. We will discuss the rationale of the text and the main provisions of the AI Act, in order to give clarity for everyone as we embark on a demanding implementation period. The webinar will be divided into three parts: the first part will be dedicated to the Act's objectives and architecture, the second part to the risk-based approach adopted by the Act, and the third part will focus on the governance framework, of which the AI Office is obviously a component.

We also aim to make this session interactive, so each of the three parts of the webinar will be followed by a Q&A session using the Slido application. You should now see a QR code on your screen to connect to Slido; the hashtag is "AI in Europe". So connect to Slido, feel free to participate, and ask your questions at the end of each of
the three topics. But before we dive into this agenda, let's start with a short video from Lucilla Sioli, director of the AI Office, who recorded a brief welcome message for us.

Good morning, I'm delighted to be here today to open the AI Pact webinar and to welcome you to this opportunity to engage directly with some of my colleagues responsible for the implementation of the AI Act. Together you will explore the objectives and the structure of the AI Act, its risk-based approach, its governance framework and, of course, the European AI Office that I lead as a director. The Act came into force on the 1st of August this year and has attracted a lot of interest regarding its implementation and its impact on citizens, businesses and public services across the EU and beyond. This is why these webinars are so important to us in the AI Office. We are organizing them as part of the AI Pact, an initiative launched by the European Commission in May. This policy is designed to support early compliance with the AI Act, which will become fully applicable in two years; it also aims to identify and share best practices. For us, the Pact represents a strong commitment to helping businesses and organizations navigate the implementation of the AI Act. It is also a call for collective action within and beyond Europe. The AI Pact has already proven to be a success: to date, over 1,400 organizations have joined its first pillar, which fosters a collaborative community where participants can share experiences and best practices and engage in webinars like this one. The second pillar promotes early action aligned specifically with the objectives of the AI Act, where companies can pledge certain commitments, and we already have 130 companies, including multinationals but also SMEs, committing to early compliance with the key provisions of the AI Act. These companies span all sectors of the economy, from IT and telecoms to healthcare, banking, automotive and aeronautics. The Pact offers a unique opportunity for us to engage with many organizations, as you will do today with my colleagues, to discuss the key aspects of the AI Act. After today's session, and the second one in December, we will continue with a series of more focused webinars starting at the beginning of 2025, aligned with the implementation agenda. Under my leadership, the AI Office is committed to building on the feedback and interest that you share with us through these webinars. Before you start, I would also like to extend my gratitude to our experts, Arnaud Latil and Mario Odak, who will be moderating today's session and the upcoming webinar in December, respectively. I thank you all for your interest, I wish you an excellent webinar, and I look forward to seeing you at the next one.

Thank you very much. Okay, following the words from Lucilla, let's move on to the first part of this
webinar, dedicated to the Act's objectives. The European Union has made it clear that it aims for global leadership on AI, and especially in responsible and trustworthy AI. The three main objectives of the text are building trust, fostering innovation and protecting citizens' rights. So let's dive into the first one, building trust. My first question is for you, Kilian: how is the EU aiming to build trust in AI?

Well, thanks a lot for this question, because trust is indeed the key feature of this legal text. We want to create an internal market for AI, but for AI which is trustworthy, and we do this by setting up a product framework for AI. So we treat AI as a product, that's the first message, a product which we want to make safe. In Europe we have quite a tradition of making good products, like cars and medical devices, and now we look at AI and will use the same mechanisms to make AI successful but safe. This regulation is horizontal, so it will touch on all AI; that's another important point. We don't distinguish between AI being embedded in a product and AI being used in a social context. We have clear criteria for how we want to make this safe, and the objective of the Act is to protect fundamental rights and safety, and thereby to create trustworthiness. By doing so, we want to create trust along the value chain, and this of course requires transparency as well. So we put a lot of emphasis in the AI Act on transparency: be it transparency between the provider and the deployer of an AI system, be it transparency when using generative AI, or be it transparency for a provider of a general-purpose AI model vis-à-vis the downstream provider or the AI Office.

Okay, thank you. And regarding the two other objectives: Irina, can you tell us about the history of the Commission's action on AI?

Yes, we started a long time ago, in the early 2000s, with supporting research projects in AI and robotics, and AI was always on the screen as something to be
supported. But then, from 2017, we entered more into the policy debate, and in 2018 we came out with a first strategy on AI for Europe, based on three pillars: innovation and competitiveness; the socio-economic questions, in particular regarding the future of work; and all the ethical and legal questions. Ever since then, our policy has been based on innovation and competitiveness on the one hand and legal and ethical questions on the other, and we have always considered these to be two sides of the same coin. So we had a high-level expert group to give us advice on trustworthy AI, with guidelines, but they also gave us advice on how to foster competitiveness. We had the White Paper covering the two of them, and now we have the AI Act, but we also have a strategy on generative AI, and we will be putting forward a new strategy soon.

Okay, can you tell us a bit more about this high-level expert group on trustworthy AI you just mentioned? What is its relation to the AI Act? It was an inspiration for the legislature?

Yes, absolutely. We had 52 experts from industry, from civil society, from academia, so really everybody around AI, and they discussed ethical principles for trustworthy AI. The principles they then put forward to us are really one of the foundations of the AI Act. Indeed, we had to transpose them into legal language, into legal provisions that are actionable, but a lot is based on their work.

Okay. And regarding fostering innovation and protecting citizens' rights, how does the text combine these two objectives?

Innovation is absolutely key, and let me just say first: if something happens with an AI, you are liable anyway, with or without the AI Act. But the AI Act helps you to design a high-risk AI in a way that avoids liability, so it creates legal certainty, and legal certainty is clearly a factor for innovation. Then, all the requirements we have in the AI Act for this legal AI are going to be standardized, so once the standards are there, it will be relatively easy to implement. And then we have something I think we are really proud of in the AI Act: the regulatory sandboxes, that is, the possibility for companies, for AI providers, to develop AI jointly with the regulator, in conformity with the law.

Okay, we will talk later about how the Act deals with fundamental rights and privacy, but first let's go to another topic: the Act's architecture. Laura, I have a question for you about the architecture of the text. The text is quite massive and complex: more than 100 articles, 200 recitals and 13 annexes. It is a massive text.
So, just to have a short overview of the text, could you explain to us what the main building blocks of the text are, Laura, please?

Yes indeed, happy to, and you are very right, it's a very comprehensive document: 113 articles, some of them with as many as 10 paragraphs. The way the AI Act is built is very common for EU legislation. We start out with general provisions, in which we set out the subject matter, the objectives and the definitions that apply to the text, and within the general provisions we also have one article that concerns AI literacy, because that is also horizontal. Then we enter the next block, which contains the rules that will apply to providers and deployers of AI systems and of general-purpose AI models: starting with the prohibitions, then the obligations that apply in relation to high-risk AI systems, and also their classification, then the rules related to transparency, and finally the block on obligations for providers of general-purpose AI models. So that's the block with the substantive rules. Then we have the block that you have just touched upon, which concerns innovation, where we set out the conditions for AI regulatory sandboxes and for real-world testing of high-risk AI systems, another regulatory tool we have integrated to support innovation, and we also include a couple of support measures for small and medium-sized enterprises and startups, with some derogations from the obligations. So that's the block on innovation. Then we have a block that sets up the governance, because rules always come with a supervision and enforcement scheme. There we attribute the responsibilities of the responsible authorities, the powers that they will have, the powers that the AI Office will have for some parts of the AI Act, and also, because governance is more comprehensive than just enforcement under the AI Act, the advisory bodies. And then, very finally, the text closes with some final provisions that are about keeping the AI Act up to date, the fines, and the entry into application, about which we will talk later.

Thank you, Laura. Now let's turn to the interplay between the Act and other key EU regulation. It's
a very important topic, and I'm sure a lot of people are asking themselves about the interplay between, on the one hand, the AI Act and, on the other hand, other key EU legislation that applies to AI, such as the GDPR, the DSA, the DMA, copyright law and cybersecurity law. So this is a question for you, Laura: what is the relationship between the Act and all this other legislation?

On the AI side, Kilian mentioned already that the AI Act will apply horizontally, so it applies across all sectors, and when designing the AI Act this was a deliberate choice. We already have a lot of product safety regulation out there, and designing the AI Act as a horizontal product regulation allowed for a complementary interplay with it: to the extent the AI Act applies to products that are already covered by product regulation, there is a harmonization of procedures.

And the GDPR and copyright law?

Indeed. So that's the one dimension; the second dimension is of course, as you mentioned, the digital rulebook which we already have. To set it out: the AI Act is without prejudice, which means all of these other regulations will continue to apply. The AI Act has been designed to be complementary to the GDPR and to the digital rulebook, which includes the Digital Services Act and the Digital Markets Act, and mutually reinforcing, and you see this throughout. For example, for the Digital Services Act there is a huge interlinkage when it comes to generative AI: the transparency obligations around AI-generated content that are being put forward under the AI Act will support the effective enforcement of the Digital Services Act. When it comes to the GDPR, we foresee special cooperation provisions and data access requests between the authorities that enforce, on the one hand, the General Data Protection Regulation and, on the other hand, the AI Act.

Okay, thank you. In addition to these relationships, the Act relates to the New Legislative Framework. The New Legislative Framework approach is well known to stakeholders, but I'm not sure
that AI experts are really aware of this framework. Kilian, could you explain to us what the New Legislative Framework approach is and how it interplays with the AI Act?

Well, the New Legislative Framework is an approach which is not so new in the Commission, if one is honest; we have used it since the late 80s. We do not want to prescribe detailed technical requirements in legislation, because technical requirements move on and change all the time, and this is of course particularly true for digital products. Therefore, in New Approach legislation, what you have are rather high-level requirements, which are then underpinned by standards, and the standards are developed in a co-regulatory approach and adapted over time. This is much more flexible, much more agile. The New Legislative Framework is indeed the core of the regulation; we also have, in Annex I, Section B, a little section on Old Approach legislation, but there we basically modify the sectoral legislation. So what we do now is, as I mentioned already, follow the product-based approach: we treat AI as a product, and that has a few consequences. It means that the provider, the one really developing and bringing the AI to the market, has the bulk of the obligations. He or she has to make sure that the product, the AI, is safe and that it complies with the requirements, which are then further explained in the industry standards. As a consequence, the deployer, the one using the AI, has a rather simple set of rules, so he can rely on this. The provider has to make sure that the requirements are fulfilled by carrying out an ex ante conformity assessment, where compliance with the requirements is checked. This can be a self-assessment, or it can be an assessment by a notified body, by a third party as we say. As a result, he receives a CE label, and this is a big advantage, and this is also where we want to marry the framework with the idea of innovation and supporting AI: with a CE label, this AI can be marketed without any further restrictions in all 27 member states. It's like any other product, like a car or a medical device: once you have done your conformity assessment, there is no further restriction if you want to go to another market. And then of course there is ex post market surveillance to make sure that the product stays safe. So there are market surveillance authorities at the national level which will watch, and they have a coordination and information exchange mechanism, so that we make sure that this internal market stays safe. It's not just a formal exercise you do at the beginning; there is
a certain market surveillance. But the advantage is: once you have the label, you can market and use the AI safely all over Europe.

Okay, thank you, that's very clear. So now it's time to take a few questions from the audience, and we have a first question about the GDPR and its interaction with the AI Act. The question, as I read it, is: how do you plan to make it clear for business how to implement the EU AI Act with regard to the GDPR? Who wants to answer this difficult, but important, question? Laura, maybe you take it.

Yes, thank you very much. Indeed, as I mentioned before, the GDPR and the AI Act will apply in parallel, and they have in principle been designed to be complementary. We will clarify this interplay by providing practical guidance, and we are working very closely with the institutions responsible for data protection to make sure that we cover the open questions and really clarify the interplay in practical terms for companies.

Okay. Do you want to add something about the GDPR? No? Then we have a second question, about the application of the AI Act for specific businesses: what are the practical steps for specific applications of the AI Act for business? I guess the question is about whether, if a business is involved in healthcare or finance or another specific sector, you plan to do something sector-specific.

As I said, the AI Act is a horizontal act, so it touches on a lot of different areas, and we deal basically with high risk; that is one of its key features. And high risk, in our context, very often also means highly regulated areas. So we understand that there is a certain complexity, because where we look at finance, or at medical devices, these are areas where there is of course already some regulation. Therefore it's very important that the interplay between the AI Act and the sectoral legislation, be it finance, healthcare or, for that matter, machinery regulation, is really ensured. This is nothing new in the New
Legislative Framework, and this is very important to understand: if today you have a toy with a Bluetooth function, you have to comply, for instance, with both the Toys Directive and the Radio Equipment Directive. So this is something operators are used to doing, but of course we on our side have to make sure that it works very well. In the future, operators will have to make a risk mapping at the beginning, and then they have to see which sectoral legislation and which parts of the AI Act they have to apply. We want to set up groups with the member states; we are working on this, and we will already have meetings of subgroups under the AI Board, which is our steering board, so to say. We want to look in detail at these sectoral legislations, and there we will provide even more practical guidance: for instance, if you have a robot with AI, how it should be organized in practice that the machinery regulation and the AI Act are both applicable. But the good news, which I would like to convey to companies today, is that we will make sure that there will be, for instance, only one conformity assessment: our objective is that the conformity assessment for the robot in the future should also include the AI part. We will really pay a lot of attention to avoiding duplication of procedures.

Okay, thank you. We talked about the GDPR and the New Legislative Framework, but there is a last question on Slido, about fundamental rights. The question is: will the AI Act require a higher protection level for fundamental rights, or how does the Act relate to fundamental rights?

If I can take that: the AI Act in a way enables us to enforce fundamental rights; fundamental rights are at the top of our value pyramid. For example, if I apply for a job and I'm not taken, I don't know whether I have been discriminated against or whether there were simply better candidates. But the way this kind of AI would work, and would be constructed under the AI Act, means it would be relatively easy to find out what the problem was. So the AI Act really helps the enforcement of fundamental rights, to see in that case whether I have been discriminated against.

Okay, thank you. There are no more questions from Slido on this part, so let's turn to the second part of this webinar, on the risk-based approach. You just mentioned the pyramid of risk, and we will talk about that. The risk-based approach is one of the main distinctive features of the AI Act, marking a significant departure from a previous one-size-fits-all model of
regulation. So let's delve into this risk-based approach. To start: the AI Act is based on a pyramid of risk. We often present this as a four-level pyramid of risks, although during the negotiations the release of ChatGPT transformed the way we conceptualize it. So my first question about the risk-based approach is for you, Kilian: there is a slide on the screen with the pyramid; could you comment on it in a general way, please?

Well, with a lot of pleasure. The underlying idea of the pyramid, and that is perhaps the key message, is that we only want to regulate AI as much, and as far, as is required by the risk. That is the key idea underlying this pyramid: where there is no risk or limited risk, we only want to intervene very, very lightly, and this is, I think, the biggest point you need to retain from this seminar. If you look at the pyramid, what you see are the four colors, and there is one little thing which is of course not correct, which we have done just for the purpose of presentation: the green layer, basically the layer of near-minimal risk, is much bigger in reality; in our estimate, these are 80 to 85% of AI systems.

Correct me if I'm wrong, but the green part, the no-risk part, is not mentioned in the text?

It's not really mentioned in the text. The only thing we have for no risk, or minimal risk, because there is nothing in life without any risk, of course, is voluntary codes of conduct, but these are, as the name says, voluntary. And here are examples why: if you look at your smartphone, your app organizing your photos, or your app suggesting the music in the morning when you want to go for your run, all these things will not be regulated in the future. That's the first thing to retain. Secondly, if you go up this pyramid a bit, and you see that's why it's a nice pyramid, because the layers get smaller, just to be clear: if you are not in the high-risk, unacceptable-risk or limited-risk categories, you can do whatever you want. You have to respect the GDPR, of course, but as regards the AI Act there is no regulation for all these use cases.

Do you have some examples of use cases with no risk?

Yeah, absolutely. First, if you want to check, as a provider, whether you are covered, you
have to ask yourself two questions: first, what am I doing with the AI (we will come to this), and second, is it minimal risk? If so, then you are out, and you just have to respect the normal rules, of course. I mentioned two examples already, but there are other things, like predictive maintenance in a factory; lots of applications used in logistics will not be covered. So if you move away a bit from consumer-facing AI towards business-facing AI, a lot of things which basically prepare data and help you organize your business may be AI, yes, but they have no effect on fundamental rights and no effect on safety. You can continue to do this; there will not be an intervention from our side.

Thank you.

If you then go up, we have the yellow layer; here is the transparency risk. What is important to retain here: we do not ask you to change your system, we do not impose quality requirements on your system, but what the Act asks you to do is to be transparent that you use AI. This means, for instance, if you use a chatbot, that people know: I am not speaking to a person, I am speaking to a chatbot. But it also means that if you synthetically generate content, like deepfakes, in text, audio or video, you have to label it, and you have to label text of public interest as well. And, quite importantly for the future, you have to introduce tools, in a machine-readable way, so that it can be detected whether content, be it text, video or audio, has been synthetically generated. This should help us in the future to distinguish between fake and real.

Can we say that this layer is horizontal, applicable to all use cases?

Well, in a way it's applicable to all generative AI, if you wish: all AI which basically produces some kind of content, either text, video or audio. There you have these obligations, and we think that these are guardrails for the digital society, because in the future too we need to be able to distinguish what is human-made and what is
synthetically generated.

Okay, so let's move on to the next layer.

Then we come to the orange layer, which is a bit the core of the regulation, because, as Laura explained very well with the different chapters, most of the provisions concern these: the high-risk AI systems. "High-risk" is perhaps sometimes a bit misleading, because it may sound like: don't do it, it's very risky. But that is exactly why we have the regulation; we have a lot of high-risk products: the car is a high-risk product as well. We could also call it a high-impact product. Here we have qualitative requirements; this is where you do the ex ante conformity assessment I spoke about, where you have to check that the product is safe, and where you have the controls of ex post market surveillance. But the message is: you can do it, because it will now be checked, it will have received the label, and therefore it is safe. And this is a good thing: all AI in Europe, be it low-risk, medium-risk or high-risk with the CE label, will in the future be safe, because where it is needed we will have made sure that the safety requirements have been checked. And here you have the two main categories: you have AI that is self-standing, like a recruitment system or admission to university, but you also have AI embedded in a product, for instance in a robot. And then, at the top of the pyramid...

Yes, just before we move to the top of the pyramid, could you explain the different use cases from Article 6(1) and 6(2) about high-risk systems? There are two different kinds of high-risk system, and I'm sure it's
not clear for everyone, because it's complex, so it's a point of interest.

I agree it's a bit complex, but it's important to get it right and not to overdo it, and that is what we tried. In Article 6(1) we have the first main category: you are a high-risk system if you are a safety component of a product which is subject to a third-party conformity assessment. The idea here is that if a product is subject to a third-party conformity assessment, the sectoral legislator has considered this product to be particularly risky and therefore has subjected it to strict rules. An example would be a robot interacting with a human: that would be a high-risk machinery device, subject to a third-party conformity assessment, and the AI steering that robot would in the future be subject to our requirements. That shows, as well, that there is always only one conformity assessment, because in any case, for the robot, you would have had to do an ex ante conformity assessment. But now you have clear criteria for the self-standing AI as well, the AI in, let's say, the more socially related systems. For this we have basically created categories in Annex III; there are eight of them, and we have a whole article where we describe on which basis we have identified these categories. They range from biometrics to education, to labor, to law enforcement, and these are areas which are particularly sensitive for fundamental rights, because very often, if you look at these areas, you have a relationship of subordination: in law enforcement or border control, people are vulnerable and exposed to public power. Public power may, and should, use AI, we are not at all against this, on the contrary, but it must be good AI, AI which is trustworthy and checked, and there we will come in with our requirements.

Okay, thank you, Kilian. Can you explain the relationship between the different levels of the pyramid?

The relationship is not mutually exclusive; that is something we perhaps need to explain. Except of course for the prohibitions, which we have not yet touched on: prohibited practices are obviously prohibited, and therefore you
should not do them. But you may have an overlap between high risk and transparency. It could be, for instance, that you use generative AI in education: in that case it would be high risk, because it's education, it's in the school, and again students are, to a certain extent, in a situation of subordination, dependent on marks or grades; and if it is generative AI creating certain content, you would also have to label that content. So these things can go together.

Okay. And there is another category of regulated AI: the general-purpose AI systems. How does this category, which does not fit exactly into the pyramid, relate to the regulated levels of the pyramid, high risk or transparency?

Well, general-purpose AI systems have, of course, the distinctive feature that they are general: they do not have just one purpose but a multitude of purposes. Nevertheless, the rules we have still apply. It means they should not do something which is prohibited; that would of course be prohibited. If they are used for a high-risk purpose, they need to be ex ante conformity assessed: for example, if you use one of the well-known generative AI systems for recruitment purposes, to rank application files, it would become high risk, and then you would have to do the ex ante conformity assessment. And of course they are subject to the transparency rules, because very often these systems are designed to create content, and then this content should be labeled, and they would need to include technical tools so that it can be detected that the content has been synthetically generated. So the normal rules apply; we would need to see what these systems are
doing, or for which purposes they are used. Okay, thank you Kilian. So now let's turn to another topic: the timeline of application of the text. This is a question for you, Irina. The text entered into force on 1 August, but its different provisions will apply on a staggered basis from now until 2027. We have a slide of the timeline on the screen. Irina, could you tell us when the different provisions will apply? Thank you. The first thing to apply is, if you want, the first chapter. This includes the definitions, it also includes AI literacy, but in particular it includes the prohibitions, and that will apply from February next year, so two more months to go. So the prohibitions and some general provisions will apply first. Then we have the rules on general-purpose AI Kilian was just talking about: they will apply from August next year, and by then we are going to have the code of practice. And then we have the different layers of high-risk AI Kilian was talking about. So we have all the, what we call, Annex III AI systems, and let me be clear: everything which is high risk is explicitly mentioned either in Annex III or in Annex I. Everything in Annex III, the self-standing AI systems, will apply from August 2026. And then we have the ones which are embedded, so the car and the robot Kilian was talking about: this will be August 2027, also for the reason that there the AI requirements will really have to be integrated
into the existing conformity assessments, so we thought getting this in place would take a bit more time. Okay, thank you Irina. And the release of ChatGPT in November 2022 shows us that things are changing. So, Kilian, how will the AI Act remain up to date if the use cases change, if some use cases we no longer consider harmful, or if new use cases arise? How will the text stay up to date? Well, making the text future-proof was a big challenge, and I think this is a common challenge for digital legislation, because we regulate with certain unknowns: the market and the products change and develop so quickly that nobody can really pretend to know exactly which kind of products, with which risks, will be on the market one, two, three years down the line. We nevertheless tried to be clear, because it is very important that we have clear rules, but also to leave a bit of room for the development of these rules, because otherwise the rules may be like a straitjacket: they may be too rigid to take up new developments. Therefore we have, for instance, taken the New Legislative Framework approach with the standards, so the standards can be further developed, which is a very important element. We have allowed for a number of opening elements in the Act, for instance with delegated and implementing powers for the Commission, so that we can adapt, for instance, the list of high-risk use cases: the regulation foresees that every year we will come up with a report, look at the high risks, work with international organizations like the OECD, and update our list. And, as Irina mentioned, we have developed a code of practice for general-purpose AI. It is a co-regulatory approach, so something we can adapt as well; there may be follow-up codes. We will try to do this in dialogue with stakeholders and with the industry, but we will need to develop this a little bit further in order to see what comes up. All these elements together should help us make it future-proof. Okay, thank you Kilian. And Irina, there is an ongoing consultation launched by the Commission on 13 November 2024. Irina, can you tell us more about this consultation? Yes, actually there are two consultations ongoing. I just said that certain rules will be applicable from February, and there we will provide guidance, just to clarify what exactly will be applicable. The two consultations we have launched are on the definition of AI, which is indeed the condition for something falling under the AI Act, and also on the prohibitions. The consultations will be open until 11 December. Basically, what we are doing for the prohibitions is saying: this is the prohibition, these are the different legal concepts, and we are asking whether anything of what is written in the prohibitions needs specific clarification. Then we ask for use cases: whether people have use cases which clearly fall under the prohibitions, but also, and this is very important to us, whether there are use cases where they feel it needs to be clarified whether they fall under the prohibitions. All this will then go into the guidance, where in particular we want to give many practical examples of what is in and what is out. Okay, thank you. High-risk systems are at the core of the AI Act. Laura, could you sum up the requirements regarding high-risk systems, please? Yes, indeed. We introduce obligations both for providers of high-risk AI systems and for deployers. Providers are those who are
developing them and putting them on the market; deployers are those who use them under their own responsibility. To give you a very concrete example: a provider could be a developer of an AI solution that evaluates CVs, a software tool, who sells it; and then you have the deployer, a company that buys this and uses it in their HR department. The core of the obligations lies with the provider. Kilian explained this earlier: this follows the logic of the product legislation approach. Providers bear the most responsibility because they are the ones selling it on the market, and they have influence on how the technology is developed. So what they have to do in terms of obligations is to ensure the high-risk AI system meets certain technical requirements. There will be another session where these are explained in detail, but to mention a few, they relate to data governance, human oversight, accuracy and robustness. These requirements are set out in the AI Act, but they will be operationalized through the harmonized standards which we have touched upon. This is an ongoing process: right now the standardization organizations are developing these standards, and then hopefully in the future providers can follow these standards and trust that what they do is in conformity with the AI Act. Other obligations they have are to put in place risk and quality management throughout the life cycle; they must register the system in a central EU high-risk database; and they also have further obligations that relate to monitoring for incidents, possible reporting, and cooperation with authorities. So that's for the provider. For deployers it is a bit easier: primarily they have to use the high-risk AI system in accordance with the instructions of use, make sure that the staff who operate the high-risk AI system are well trained to exercise human oversight, and they have to use representative input data. For certain deployers there may be additional obligations, namely for the public sector, because high-risk AI systems used in this field are particularly sensitive, as we often have a relation of subordination. So in addition, before they deploy a high-risk AI system for the first time, they must carry out a fundamental rights impact assessment. And on top of that, when the public sector deploys high-risk AI systems, they also need to register them in the EU database. This doesn't apply to all deployers, just to the public sector. So that's a bit of an overview, and I think in the next seminar you'll deep-dive into the requirements for high-risk AI systems. Yes, sure. But just a last question about high-risk systems: the AI Act is not complete regarding all the requirements for high-risk systems; there is an ongoing building of requirements through standardization. We will talk later about standardization,
but could you just tell us more about standardization and the ongoing standardization process regarding high-risk systems? Certainly. Just to clarify: the AI Act is complete, so it sets out everything that will apply in the future. What the standardization does is to operationalize, to really detail out, how this should be fulfilled. So you have an obligation to develop an accurate high-risk AI system, but what does that mean in practical terms? What kind of steps do you have to take, and how accurate is accurate? This is what is going to be detailed in these standards. As the European Commission, we mandated the European standardization organizations CEN and CENELEC already in spring last year. Why so early, when the AI Act was only adopted now? Because this is a process that involves many stakeholders and takes a lot of time. We are now updating this mandate to reflect the final legal text, and the standardization organizations will then, in the course of next year, deliver the standards. We as the European Commission will assess whether they correspond well to the legal text and endorse them, so that in the future providers can follow them. And really to emphasize: they are intended as a tool that provides legal certainty and practical guidance, but they also help to keep the AI Act future-proof, because they can be adapted, and they ensure that the AI Act is workable, because they are put in place through a co-regulatory process. Okay, thank you. So now let's turn to our second Q&A session. The first question I have on Slido is about prohibited AI systems. The prohibitions will apply from February 2025, and the question is: how can firms start to prepare for the prohibitions in February 2025? How can organizations start to prepare now? They can check Article 5, and if they have a doubt about whether something is prohibited, they can always drop us an email. No doubt about it, we will provide the guidance before February, that's clear, but in the meantime, if there are doubts, we are happy to help, and there are others who are happy to help. What is important is that, if and when something is prohibited, they stop using the system; in that respect it is probably relatively easy to have a switch-off button. I think that is very important, and it is in the philosophy of the AI Pact. So organizations can send you an email, you said, to ask whether an AI system, a use case, falls under the prohibited practices or not? Yes. And you can answer yes or no, or maybe? You are not lawyers? We are, yes, although sometimes we will also have to discuss the cases internally. If after this event we get 5,000 emails, it might indeed take a bit longer, but normally, yes, we would be happy to help. And this is also exactly what is included in our public consultation, which closes on 11 December: the question of clarifying whether a certain use case falls under the prohibitions or
not. Okay. And the second question is about the cost of the conformity assessments: what will the cost of the conformity assessment for high risk be? Well, we have looked at this, of course. It is now a little bit outdated already, because we are some years down the line, but we looked at this very carefully in the impact assessment, so if you want figures, you can look there and you will find some indications. Again, the key point is that we tried to align this as much as possible with existing procedures. As I said already, for AI embedded in a product, the AI will only require an ex-ante conformity assessment by a third party if the basic device where you put in the AI requires this as well, so it will be part of the overall conformity assessment, and these companies will already have quality management and risk management schemes, because this is normally in place in diligent companies, for instance for medical devices. It may be a bit more demanding for self-standing AI systems, we realize that, because here it is the first time these things are really regulated. Nevertheless, we foresee a self-assessment, with the exception of biometrics, and the self-assessment should also help you and facilitate the conformity. So it comes with certain costs, that's true. And last but not least, we offer the sandboxes in each member state, which will help small companies in particular, because one message I would like to get across is that nobody in Europe should stop developing AI and putting AI into their products. We will try, with our tools, with the sandboxes, with the innovation hubs, with other tools, to help, and we seriously do not want to send the message that anybody should be deterred from using AI. Could you tell us more about these regulatory sandboxes? They are not well known to all stakeholders, so can you elaborate on this point, please? Yes, it is very important, because in the regulation we always want this balance between regulation and innovation, and in particular for smaller companies, of course, every regulation comes with a certain burden, whatever we do. This burden can be heavy because they are smaller and they have to go to market very quickly, and therefore we offer this tool, the sandboxes, which are part of what we call our ecosystem of support. As a Commission we have set up this system of more than 200 innovation hubs, which provide really practical support, and we have testing and experimentation facilities, and then we have the sandboxes as well. The contact point for companies will be the innovation hubs, and if they then go to the sandbox, they will first get qualified legal advice, exactly what Irina was pointing at: if you have a question and you do not know, and you do not want to consult a law firm or a consultant, you can go to the sandbox and they will try to help you. If you need more, you may be admitted to the sandbox proper, and there, for instance, you will be able to test your product under the supervision of the competent authority. So you have what we call a safe space, where you are protected and guided, and there you can prepare your product for bringing it to the market and for all the requirements under the conformity assessment. When you go out, you get an exit report, and that can be a tool for you to show compliance vis-à-vis the authorities when you do your conformity assessment. So it would really take you by the hand and prepare you, so that you can with minimal hurdle
carry out the ex-ante conformity assessment afterwards. Okay, thank you. And we have a last question, about predictive policing. Some countries use AI for predictive policing; it is a sensitive area. So will this fall under acceptable or unacceptable risk? The answer is clear: it depends. If you take the scenario of the movie Minority Report, and you just run a system on the whole population and tell people that this person is going to commit a murder tomorrow, this would indeed be prohibited. So any predictive policing running without any facts, without any indications, just in the spirit of mass surveillance, would be prohibited. But certain tools for profiling, and for actually helping the police to put facts together, would be high risk, which means they would be allowed subject to the requirements. Okay, thank you. There are no more questions on Slido, so let's move on to the last topic, governance and enforcement. This
is obviously an important point. The Act builds a governance framework, and the first question about this framework relates to the AI Office. This is a question for Kilian: how would you present the AI Office, in a broad, general way, please? Well, the AI Office has been set up by a Commission decision and is part of the Commission, but it has a bit of its own structure. The underlying idea is that the Commission will bundle all its knowledge and all its activities related to AI within this office, in order to ensure a coordinated and uniform approach vis-à-vis AI. And this underlines for us, and this is perhaps one thing to get across, that AI is of utmost importance for the Commission. We want to make Europe a continent of AI, and therefore we want to put our efforts together and reinforce them. So we are growing: we started with basically 60 people, we are now at 80-something, and we want to grow to 140 people with five different units. The second thing we wanted to do in the AI Office is, of course, to be a centre of competence and to ensure high-quality regulation and compliance, but we do not only want to be a regulator. That's what you see here on the slide, the 360-degree vision: in the office we look both at the innovation part, the support part, and at the regulation. For us it is not a success if, in the end, everything in Europe is safe but there is no AI, to put it very bluntly. For us it is a success if you have a lot of AI in Europe, but AI following our values, aligned with our framework. That is what is reflected in this structure, with Lucilla Sioli, whom we heard in the beginning, as head of the AI Office; my unit here, AI Regulation and Compliance, basically dealing with the implementation of the AI Act; an AI Safety unit, where we have really brilliant experts on AI technology, which should bring us, like an AI safety institute, to top-notch science; a unit on AI and excellence, which helps develop and support projects under Horizon Europe and Digital Europe; AI for Societal Good, looking in particular at the international dimension as well as the dimension of the Global South, at what we can do so that AI does not become a dividing issue; and then, very importantly, AI Innovation and Policy Coordination, which carries out the function I described, ensuring a uniform approach and putting our efforts together to support AI, so that we really achieve the strength in AI which we need. And then we have two advisors, also very important: the Lead Scientific Advisor, who should be an outstanding scientist, and an Advisor for International Affairs. So that's a bit of the set-up, and here you see the three objectives: to implement the regulation, to foster innovation and research, and to contribute to international development. Because we should not forget: we do not do this in isolation, we do this with our partners, with like-minded partners, and we want as much as possible to
align, and we want as well to persuade others to follow our model. Okay, thank you. How many people work here? It changes by the day, so I would need to check every day, because we are growing and onboarding. We have about 80 or 85 today, I would say, but our objective, and I think it is realistic given the size of the task, is to be at something like 140 by the end of next year. Okay, thank you for this very interesting insight. So the European AI Board is different from the AI Office, and now there is a question for Laura about the European AI Board. This structure is at the centre of the governance of AI; how would you present the European AI Board, please? The European AI Board is our key advisor and counterpart, and by "our" I mean the AI Office's. It is a member states group, set up by the AI Act, and it comes together to advise on decisions we take on AI innovation policy and on the implementation of the AI Act, and also to discuss our strategy for international cooperation on AI. It has a very important role in the governance, because, and this is something we haven't talked about yet, a large part of the oversight and enforcement will take place at national level, by market surveillance authorities in all EU member states. To make the AI Act a success, and to ensure it provides legal certainty and creates an internal market, it is important to coordinate the approach well and ensure coherent implementation. So this is one of the key roles of the AI Board in relation to the AI Act: bringing everyone together and discussing a harmonized approach to interpreting and applying the AI Act. The way it is set up is as a high-level group of representatives from the member states, but there are dedicated thematic subgroups where experts come together, and where the different authorities will come together in the future, once they have been established and are in place. The AI Board is up and running; it has already had its first meetings, and the next meeting is just around the corner, on 10 December. It is very operational,
and indeed a key platform for us to keep in touch with the EU member states and ensure a coordinated EU approach on AI. Okay, thank you. And in addition to the AI Board, each member state will have its own regulatory bodies. This local approach allows the context of each member state and each industry to be taken into account. So how will the AI Office collaborate with all these national regulatory bodies? There are two dimensions. First, the AI Office is the key coordinator for the coherent application of the AI Act. This means we organize, for example, the AI Board, we bring everyone together, we provide guidance to ensure consistent application, and we have functions such as putting in place information-exchange systems. So that's the coordinating role, and that also applies to coordinating the good cooperation between all of these authorities in the EU member states; we will cooperate closely with them in this regard. But the second role we have is that we will also enforce part of the AI Act ourselves, namely the rules that apply to providers of general-purpose AI models, the large, powerful AI models. Here the oversight is centralized at EU level, because there are only a handful of these models out there; it is a very novel field, and it made sense to have one institution dealing with the oversight of these models. But there are of course interlinkages: such models might be integrated into AI systems that are within the remit of what the member states oversee, for example high-risk applications. So the AI Act foresees certain coordination obligations,
information-exchange channels, and we are right now in the process of setting them up, in close cooperation with the member states via the AI Board. Okay, thank you. Maybe we can go deeper into this question with the Slido questions afterwards, but now let's move on to the compliance-mechanism topic, which is very important, because, as we all know, the AI Act is not soft law. So, Kilian, what will happen, concretely, if a company or an organization violates the AI Act, when you start to control its application? I'm sure everybody wants to know this. Well, you are fully right, it is not soft law; this is hard law, if you wish. Of course our intention is not to punish and to sanction; that is why we also have the AI Pact. We would like to get everybody on board, and we are really interested in engaging, in helping, in supporting. But of course, if there is non-compliance, there will be a follow-up. As Laura pointed out, for the high-risk systems and the prohibitions it will be the national authorities doing this: the market surveillance authorities, who carry out the ex-post market surveillance, and they have all the powers from the Market Surveillance Regulation, because the Market Surveillance Regulation is fully applicable. This starts with information requests to providers, requests to modify their systems or to repeat, for instance, steps in the conformity assessment, and, as the ultima ratio, it can even lead to withdrawal of the system from the market, possibly combined with fines. We, the AI Office, are responsible for the enforcement of the rules on general-purpose AI models, these very big models which you all certainly know, like for instance GPT, or Claude, or Gemini; I do not want to promote any of them, just to give you a flavour of what I am talking about. Here we also have powers under the AI Act: we can ask for information, we can ask for evaluations, we can carry out external evaluations, and, as the ultima ratio, we can also ask to take the model down, and we can impose fines. So a whole package is foreseen. But, not to be too shocking: our idea is not to start with fines, our idea is to start with a dialogue. If somebody does not comply, however, we will make sure that the rules are enforced. Okay, thank you. Do you want to add something regarding the application? We have enough time to elaborate on this. I want to add a point, because you mentioned that there would be one supervisor per
member state. This will not necessarily be the case. We spoke earlier about how some AI systems will be deemed high risk because they are integrated into already regulated products. For these, the responsible supervisors remain responsible: in the field of medical devices, the national medical-device supervisory authority remains responsible. There will, however, be one single point of contact within each member state, for citizens who wish to lodge complaints and for the cooperation at EU level, but the supervisory system will include more authorities per member state. Okay, thank you. So now let's turn to the last Q&A session; we have time for some questions. The first question is about the relationship between the AI Office and national regulatory bodies. The question suggests we are seeing a lack of national support structures, and I'm not sure that's right, but what is the relationship between the AI Office and the national structures? Maybe the question is: when national structures do not have the answers, can you help them, or not? They do have the right to ask for our support and for information, in particular when it comes to checking compliance of AI systems that integrate AI models, but we also foresee further channels through which national authorities can ask for our support. We do realize that for smaller authorities this new technology, and the kind of expertise that is needed, can be challenging. We are already taking steps now: for example, we have set up an action called Union AI testing facilities, which market surveillance authorities in the future can access for support when it comes to testing, if they do not have the technical means themselves. One advisory group we are also setting up, the scientific panel, a group of independent experts who give technical advice and support the enforcement, will also provide support to the national authorities. So we are thinking about ways to make sure everyone has the means to carry out this responsibility, also in smaller member states. And just to complement: we even have Article 75, which foresees, for instance, that if member states deal with systems based on large models, they should cooperate with the AI Office and the AI Office should support them. And we organize, of course, this cooperation between the national competent authorities via the AI Board, which Laura described in the beginning. So we will make sure that everybody is on the same footing, and the AI Office will also be a support structure: we will provide guidance, we will provide support tools, in order to make sure that, if competent authorities are not
sure how to deal with certain issues, we will try to provide input. Okay, thank you. We have a second question, maybe for you, Irina, regarding the scope of the AI Act; maybe it is more of a political question. Why are military and defence use cases out of the scope of the AI Act? This is a very good question, actually. When we started discussing the AI Act, AI in defence was being discussed at an international level in Geneva, and it was subject to negotiations there. So that was initially the reason for saying that this is a domain where it is much more important to have an international consensus. The world has changed a bit since then, and now it would probably have been difficult to include it, because, as you know, national security is also excluded from the scope of the Treaty, so from all our European activities, and indeed also from the AI Act. Now, national security is a concept with limits: you cannot claim that everything is national security, and you can also not claim that everything is defence. But where it comes to defence, where there is also confidentiality and so on, it is excluded. Just one word: we also have dual-use products, for example certain drones or other things that can be used for military purposes but also for civilian purposes, and those products, as long as they are not exclusively destined for the military, are indeed subject to the AI Act. Okay, thank you. There is another important question regarding the definitions of deployer and provider. Could you explain what the definitions of provider and deployer are?
I'm sure it is not clear for everyone; maybe Laura? I'm happy to take it. The provider is the natural or legal person, so the company or organization, who puts an AI system or general-purpose AI model on the market under their own name or trade name. They do not necessarily need to be the ones who developed the AI system, but they are the ones marketing it, and that makes them the provider of the AI system or general-purpose AI model. The deployer is the institution, the legal or natural person, so normally a company, an authority or another organization, who buys such an AI system and uses it under their own authority. I gave an example earlier; maybe to give another one to illustrate this really well: suppose we have an AI system in the private sector that evaluates CVs or deals with incoming applications. The ones who are developing it and marketing it are the provider; the company using it, making it available to their employees, to their HR department, that
company is the deployer in the legal sense. Okay, thank you. And there is another question about the competitiveness of Europe and the AI Act. People sometimes fear that the Act will stop innovation. What can you say in response, so that people do not fear the AI Act, and so that its application is not seen as too heavy or too stringent? Well, I think we already mentioned a number of elements in the Act, like the support for innovation and the need for clarity; I think it is a rather clear-cut set of rules. But beyond this, I think it is more important to understand that the AI Act is in the first place an internal market regulation; that is where I started. We want to create an internal market for trustworthy AI, and we think this is quite important to understand, because there will not be any more national legislation on this. That is the first good news: you will not have 27 different legal frameworks. So when you develop an AI in Belgium and you want to bring it to France, you do not need to make any adaptations; the AI with a CE marking can be marketed there. I think that is very, very important. Second, we have to look at what we consider as AI, and as we clarified, there is a lot of AI which will not be regulated at all; we discussed this in the beginning. Where we do regulate, there is really a very sensitive risk, and the problem with technology is what happens if it gets things wrong. We have had cases in Europe as well where AI got it wrong in these sensitive areas, be it in law enforcement, when the police use it, or when judges use it; these are very sensitive, and the effect of mistakes or failures would be very deterrent. I am sure that the uptake of AI would go down if you did not have this kind of trustworthiness; public authorities would not engage. And we see there is a lot of need, because all our administrations are overwhelmed; all member states have budgetary constraints; we need to get more digital, more agile; so AI can offer a lot. But then, of course, you have to use it and create this uptake. So we think we will create a market here, because everybody can now rely on this AI without fearing reputational damage. And one message I would like to pass on for small companies is that the AI Act will also allow them to compete with established providers, because we will now have a quality standard; in the end, what the AI Act provides is a quality standard. A small company offering a service to the police, which would normally not really be recognized, because the police may be too scared to take something sensitive from an unknown provider, will now have the chance to compete with others, because they can say: my system is safe, it is trustworthy, I have checked it, you can rely on it. So this creates a level playing field and a market, and in that sense we hope, I think, that in Europe, where quite often we do not have the cheapest products but the best products, or very good products, not to
overdo and I think in this respect AI can help of course if we do it together with the with the business and if we we try to be clear and pragmatic in the in the setup of the rules okay thank you Kean and there is a question about the public administration uh will the AI act um apply in the same way for commercial uh firm and uh public administration yes it will yes it doesn't uh treat them differently now of course if you look for example um at the highrisk use cases in um in Annex
3 so meaning those in the sensitive area self sending AI systems the majority of them treat uh AI That's handled by public administration for example that's used to calculate social benefits or that's used by judges to prepare rulings um but in principle There's no distinction between whether an AI system has been provided um or is deployed by a public institution or um or a private one I may add um so the requirements are the same but indeed in certain cases and that includes AI systems being used in the public administration there is the obligation to
have a fundamental rights impact assessment before deploying before using the okay thank you um there is another question about AI literacy um this is an important question because AR literacy obligation from Article 4 will apply from February 2025 so tomorrow morning um so how does the the the uh organization can start preparing AI literacy uh uh compliance who wants to answer well the um AI literacy is key and article four indeed prescribes that uh um providers and deployers should try to train their their stuff that they are able to to carry this out and this
is of course a key provision because we have for instance the requirement in the AI act for human oversight so there should always be a human controlling this and we have transparency requirements and these are only meaningful if the employees La just mentioned that often basically the companies or the institutions will be provider or the employer if the employees carrying out and working with the AI have a have a decent level of understanding because otherwise they will not be able to do this so it's it's a key provision um this is of course we can
only part and it's it's it's it links very well to what we do today because it's what part one of the three commitments pledges for all members of the P to to already engage so I can only invite everybody to start to work on this we will have as well for those who have pledged another webinar on on AI literacy so we will try to give you more indications how this could look like but um it's I think it's very important that companies start to reflect on how they can bring their their stuff to um
to a decent level of of knowledge the AI Act is here not very prescriptive that's true because we basically say it basically says you have to do this so we will not try to be over descriptive but of course we will try to give you assistance and to guide you a bit what what is really expected and I think it's in as well in your interest and we and that's I would like to say as well from the commission we have identified Beyond Article 4 because it's not on us Article 4 Beyond Article 4 we
have identified skills as a key topic so we will when we um come out now with our strategy on AI we will you will see further actions what we want to undertake on skills because we think to upscale and rescale people uh in companies but as well outside of companies on AI is one of the key things we need to do okay thank you and there is a a main topic we didn't mention yet uh it's about uh environmental consequences of AI um what can we say about this important topic of uh energy consumption uh
by AI model and environmental consequences uh who wants to to to say a word about this difficult question on environmental maybe enena or you can answer all of you no I can I can start answering on this and um I'm sure that um colleagues have to have something to contribute um indeed that is that is a key question and it was also a question that we had during the negotiations um so we know that there is a lot of impact on energy consumption but at the time when we negotiated the AI act it was hardly
possible to measure it which means that I was talking before about the highlevel expert group and indeed one of the requirements the highlevel expert group had suggested was the environmental well-being so this was something we discussed how to put it into the AI act but we found it basically not really possible to put this in a legal measurable way but now we are doing plenty now we are having plenty of activity is a to find the benchmarks to find the parameter but B also in the AI act there are quite a few Provisions where the
environmental impact needs to be taken into account maybe to com in in indeed um this question of how to document and how to actually measure and track it's something that we are also addressing when it comes to the rules for general purpose AI models because these general purpose AI models of large language models are those for um which the impact of the training and also the operation later on um the environmental impact energy cost is the highest and so we're starting here as a starting point um providers of general purpose AI models are asked to
track um how how much energy consumption is involved in the training and operation of the model and then going forward we will be working and we are asking also the European standardization organizations to look into this into uh finding um methodologies that allow to have comparable documentation and then have this as a as a solid basis for next steps of action but the AI act indeed has as one of its secondary objectives the protection of um of the environment and I think this is something on which we are all pulling on the same string no
policy makers um Society but also the companies because the current large energy cost something that is also burdensome on the companies so looking for ways to decrease um them is something which I believe is in everyone's best interest and there's an interesting example that here because you mentioned before the adaptability here for instance you you see a delegated act so in the end once we have the methodology we can put this into into delegated act and then the the legislation can follow the development so because you ask this is a concrete example how we see
Okay, thank you for your answers. So we are reaching the end of this webinar. This is only the first part of a two-part series; the next webinar will take place on the 17th of December, and the subjects will be, first, general-purpose AI models and standardization, then the obligations applicable from February 2025, including AI literacy, but also other obligations. There is a satisfaction survey link, so please fill it in. I want to thank everyone from the AI Pact, and thank you for your attention. I hope to see you at the next webinar. Thank you, thanks a lot.