It’s safe to say that all this will be very transformative, but we know very little about what the future holds. Artificial intelligence is set to have a major impact on our lives in the years to come, yet much of what we picture is pure imagination; generally speaking, we know very little about what is actually happening in this field. Faced with that uncertainty, we imagine something very dystopian: something inhuman, run by computers or artificial intelligence, something beyond our control. I think that’s misleading, because what we need is a better vision for the future, and it is through a good vision that we can build a good future. The real question is how AI will change society while humans keep their autonomy and minds of their own, how it can help us learn and change, and how we can hold on to what matters to people: our feelings, our experiences, the things we will need in order to navigate that change. To do this, you need to understand AI. People often think this field is beyond our understanding. Is that anxiety inevitable? It’s true that we often feel anxiety or fear in the face of the unknown; it is deeply rooted in our humanity. But we have to think outside the box. These technologies are with us, they are changing us, and we are changing them in turn. We need to think a little like Taoists and see the two-way connection: everything is connected, and we are just one form of intelligence. We must learn to live in harmony with this new intelligence. That is the way to approach the problem. I see these events as very significant changes; there is no area of human activity that is not affected by them. It is difficult, because we are locked into these human bodies and we think too anthropocentrically. But I believe AI could help us think beyond human limits. That’s why I’m interested in AI translation, not just between human languages but also between non-human languages.
Maybe one day we’ll be able to communicate across species to better understand the world around us.
Will AI replace artists, now that we’ve reached a sort of hinge where AI becomes self-improving and advances become exponential? I think AI can do a lot of things. It can summarize, it can draw logical connections, it can collect enormous amounts of data. But I think there are still things it doesn’t understand. Some things can’t be replaced, such as human experience, our bodily experience and our memories; the kind of memory AI has today is far too shallow for anything of that sort. We need to be more inventive. In the writing process, you have to get back to the essence of what is human, connect to what is emotional, draw on our perception of the world and our childhood memories, really tap into our humanity. That makes you more creative, if you use these faculties wisely. I’m aware that many people lean on these tools excessively, which leads to less interesting results. But if I’ve understood you correctly, you’re saying that AI also pushes us to think about what makes us human, the things that can’t be replaced by computers. AI is both an interface and a mirror. It opens up a view of yourself. It is not only a linguistic interface but an interface between different perceptions. As individuals, we absorb different sets of data from our past, our family and our environment. So we also need to think about how to develop our own human interface, in order to transform ourselves into better human beings. What interests me about the future, and the things coming our way, is seeing how I can use AI to break boundaries and go beyond limits. AI could enable anyone to be a filmmaker or a musician. In that sense it represents a democratization of creative power, making it available to everyone. AI makes it possible for anyone to become a writer.
AI can do many things, but it doesn’t yet know how to improvise. In its creations, AI is very static and lacks a human touch, so improvisation is somewhat lacking. It all comes down to how you use the tool. Lately, a large number of people have been using ChatGPT or similar tools. What we’ve observed is that ChatGPT is now capable of generating content, answering questions, following instructions, providing information and insights, and producing things that some consider creative writing. And I’m convinced that AI will become increasingly capable over the next ten years. How far it will go is the subject of lively debate, but it’s safe to say AI will improve to some extent. One thing we are going to see is very positive developments in applications: applications built on these models, on open-source models, on models fine-tuned to your own data, applications that help you actually do something. In a few years, the useful apps on your smartphone may all be using AI to do better what they already do today. These apps will simplify our lives. At the moment it’s hard to say exactly how all this will be sorted out, but institutions like yours are starting to take it on. And it’s not just a technological discussion, because we know these capabilities can also be disastrous when used to undermine our democratic processes, to manipulate people, or to push children to do things they shouldn’t. So this is not just a technological discussion but a socio-political one. As users we can build the necessary skills, but governments have a responsibility to manage and steward this technological revolution so that the negative effects don’t win out and the benefits can really be enjoyed. To make these technologies work for us, we need an ethical framework.
Today, AI can do many things, some of them very beneficial: perception for autonomous cars and the analysis of medical images, for example, or the design of new drugs. It can make people more creative: ChatGPT is a good writing assistant for text, and similar systems can assist creativity in video and music, though they remain rather limited in those areas. Systems like ChatGPT and other chatbots can pass university exams, which leads some to believe they match human intelligence, but that is not true. These systems are quite shallow. They are trained on real-world data and adapted to given circumstances, but they can’t invent on their own, reason on their own, plan on their own, and they don’t really understand the physical world. So, to a certain extent, and depending on your definition of intelligence, it’s safe to say your house cat is smarter than the AI we see today. But this will change, yes, it will change. We are working on future AI systems capable of memory, of remembering, reasoning, planning, using tools and interacting with the real world. That will open up a whole new range of potential applications that could shake up the way we consume entertainment or the way we work today. One might worry that such systems will get out of hand, and indeed, given the architecture they are built on, today’s systems cannot be controlled: they hallucinate, they produce false information, and if they are manipulated in a specific way they can be induced to perform tasks they were never trained to do. Current systems cannot be reliably controlled. But it would be a mistake to assume that current systems can reach human intelligence; I don’t think this generation of systems can compete with human intelligence. What we need is a new strategy and architecture for developing controllable AI that is driven by objectives.
If systems have to follow such objectives, they will be controllable. That would be a completely different path towards a new generation of AI, one that would not necessarily be threatening.
Governance bodies, public decision-makers and international organizations will need to build an understanding of this type of technology before it arrives. What’s fascinating about the current discussion is that, I think it’s fair to say, AI will evolve much faster in the years to come than our current institutions and infrastructures. The mismatch between the two will matter enormously. So, where is AI headed? That’s the main question, and it is essential to bring together all our analytical capacity on a global scale to understand the direction we need to take. There are two intriguing aspects, an optimistic view and a more pessimistic one. AI differs from other new technologies in that it will displace some intrinsically very powerful players. It can have an impact on the global economy by reducing waste and increasing productivity; in that sense it is very empowering, and that’s something I find exciting across many sectors. Machine intelligence will come close to human intelligence. Generative models today are trained on broadly public data; their problems and hallucinations are well known, and training on more private, sector-specific and scientific data will have a huge impact. That shift in training will be the major transition: models trained on all your personal information, so that you benefit. There will still be misinformation, by the way, but it will be your own misinformation. And that’s the point: it is because we are programmable that AI has the power to change us, not because it is smarter. Algorithms can affect people. It’s not generative AI in general that will transform the planet, but individually-trained AIs that risk making human beings less human. AI will become an extension of us, a mirror of ourselves, mediating our interactions with society. Kids are already doing this. And if you think society, institutions, governance and governments are not going to change that quickly, well, the technology will.
Printing, in fact, reproduced words that people could already write by hand. That hasn’t changed. What changed is that printing made the written word more widely available, faster and cheaper, and that was the main technological transformation. It’s not really that these technologies do something better than people. To go back to the era before printing or the telegraph, you wrapped up a letter and carried it on horseback; the transformation is one of scale. It’s important to understand that computers don’t need to be better than humans at, say, facial recognition. After all, humans are extremely good at recognizing faces. A government could hire someone to do facial recognition, but not on millions of people; when it can be done automatically, at scale, because there are cameras everywhere, you lose the ability to be anonymous in public. That’s the big transformation. What will happen with all these language models? There has already been a lot of change driven by machine learning, and if that takes its course, and I don’t see why it shouldn’t, the chat versions of these language models will become tools for decision-making and for interaction with institutions and governments. The results may not be extraordinary if we don’t change how things were done in the past, and if we don’t take the trouble to understand the consequences, a small detail could have major consequences. I’m an optimist, though; in fact, you can’t be a critic unless you’re an optimist. The printing press was very disruptive, and we wouldn’t say it was a bad thing per se now. Yet the transition had a considerable destabilizing impact, particularly in Europe, which eventually led to the creation of the European Union. The point is mastering the transition rather than asking whether it’s good or bad. One can debate whether it’s positive or negative, but to me that’s not a very constructive debate, given the urgency of the situation.
There is indeed a question of transformation, scale and speed in everything we do. At the moment it’s mostly the big tech companies driving this; governments are trying to get involved, but it’s difficult.
To get there without causing chaos and destruction, in short, innovators and governments must work together. People may wonder which will move faster. Innovation moves faster, but if governments can keep pace closely enough to stay within a reasonable distance, that is a far better option than the reverse. Next, we need to determine what constitutes effective governance, and I think part of that starts with identifying the problems people actually care about and want solved. Governments are doing this: the EU is doing it with the AI Act, addressing privacy, threats to children and civil liberties, and the UK is focused on things like safety. On safety, the concern is that a model could be used in ways unrelated to its intended purpose, and that some actors could use AI to harm other humans, for instance by creating a biological weapon. So we will need a global governance framework. Looking at safety and other areas, we do already live in a world with some forms of global governance, and it’s worth considering how they work. In the most comparable areas, I see three levels. The basic level is technical standards: specialists in the field agreeing on how to assess and reduce harm, tied to the processes used in product development. National laws come next; that’s the second level. The third level is global coordination. And I think it’s easy to look at the world today and say, “Oh God, we can’t do this, there’s no consensus on anything at a global scale anymore.” But there are precedents. Consider how you got here: if you live in Paris, you came by car, or to shop, or by metro. You may have come from another European city and taken the train.
Or you may have flown in from even further away. Despite all the disagreements in the world today, we still live in a world where it’s possible to fly from one country to another. And one thing is certain: no one is going to interfere with the plane’s trajectory while it’s flying. There is a coordinated regulatory structure ensuring that an aircraft built in one country can leave that country and travel to another. Those measures already exist. We need to take the most relevant elements of that model and apply them to the governance structure AI requires.
We have already established a global agreement on ethical issues. People thought countries would drag their feet, because any regulation supposedly prevents innovation, and because nations naturally want to pioneer the technology and attract more investment for startups. Policymakers have objectives that differ somewhat from one another. But in the end it’s a very simple question: why do we need the technology? Ultimately the question isn’t so much the technology itself, the use of an algorithm and so on; it’s about the problems and challenges we face as humans, and how technology can help us solve them. Our societies are extremely fragmented. The sustainability goals must serve as a guide and compass for the development of these technologies. When you’re in a lab developing any technology, you have to ask yourself: is this really helping me to close these gaps? We say everything is changing and our lives are evolving, yet half the world isn’t connected to the Internet, or doesn’t have a stable connection. How do I turn such important values into something real? We need transparency and accountability. The rule of law is something we all understand, but in the digital space it’s not so simple to establish that someone must take responsibility, and someone else must be compensated, when something goes wrong. Ultimately the question is how we will shape the environment in which this technology exists, with policies, institutions and investments. We all have to be smarter. I found the analogy with printing interesting. Printing did cause disruption in Europe and around the world, but ultimately it contributed to democracy, to the French Revolution and the American Revolution, and it made the world a better place: people were better informed and more aware of what was going on. So with AI we could see a new renaissance in the future, and we want it to be a new renaissance.
Because side effects and negative effects will be inevitable, we must seek to limit the disruption, and that’s where governments need to step in. One thing is crucial. Imagine a future where, in ten or twenty years, an AI system mediates every one of our interactions with the digital world. We’ll have an assistant available at all times. We’ll be able to ask it questions. It will hold memories for us. That assistant will be smarter than we are; there is little doubt about that. It will change the way we communicate and exchange information. There will be a kind of symbiosis between humans and machines, each influencing the other, a bit like yin and yang, which flow into each other.
The repository of human knowledge and culture cannot be controlled by a handful of companies. It will be a knowledge infrastructure similar to the Internet today, but global. If the Internet’s infrastructure had been proprietary, it would be a monopoly or an oligopoly. That could have happened, but instead the Internet was decentralized and built on open-source software. The same should be true for AI: it should be freely available. That is the only way everyone can build on it; otherwise fear, apprehension and technological gatekeeping will keep it from being accessible to everyone, and I believe that is a very real danger. But consider ChatGPT and Meta: ChatGPT is a non-public technology; Meta is owned by shareholders; OpenAI is owned by a separate organization. Which do you trust more, a technology owned by a separate association or one owned by a private group? The real question is openness versus closure: anyone can use it if it’s open. That’s the global issue; the significant difference is open versus closed, regardless of the nature of the owner. One can debate the nature of these structures, but openness doesn’t necessarily mean the data is open and accessible to all; that’s not what determines openness. Even when we have all these resources and all this information, these systems are black boxes, because they’re not totally open. And when we talk about openness in everyday language, we mean transparency. Open-source software may be available, yet there is no system of democratic control. Openness is a state of mind, and large companies and corporations have a vested interest in framing the discussion their way: the purpose of closure, exploiting this confusion, is to keep the rest of us from making rapid progress. So the real divide is not humans versus machines, but the nature of these technologies.
Because these technologies are of different natures, the question is whether they are subject to democratic control and governance. There are those who hold authority and those who are subject to it, the people in charge of managing the machine. That’s the real question. It’s impossible to predict whether the future of AI will be positive a few years from now. Although there are many reasons for optimism, the current context is one of division and growing conflict. We are seeing disparities increase, and if AI is added to these conditions, the effects will be amplified. And I am well aware that some systems are more hospitable to parasites than others. So the question of power arises.
This concept of “I’m smart, and I hire people smarter than me” is fine, but does it really describe everyone in your organization? I’m not convinced at all. In the best of all worlds you may think everything will be fine, but no organization really works the way you describe, hiring people smarter than ourselves. A multitude of organizations, public and private, refuse to hire perceptive individuals. If some succeed that way, fantastic, bravo, but that’s not the reality. I was very interested in the context here: an important forum with public players such as the G20 as well as technology players, and we noticed how closely their positions aligned on stage. AI is a serious problem for us. Look at AI now: companies are making the decisions, businesses determine the outcomes, and that is the only framework within which governance can currently operate. If corporations are not nationalized, which would be hated in the USA, and if we do not otherwise control them, they are not accountable to citizens: there is no social contract binding them to citizens. This is creating a hybrid system that differs considerably from anything we’ve known. We’ve seen glimpses of it in science fiction: future leaders may be a hybrid between a technocrat and a minister, with significant global influence. Be aware of the imbalance of power. States took the decision to let companies drive investment in innovation. But AI, and generative AI in particular, presents many risks: biases, opaque algorithms, questionable ways of processing data. There has, however, been a reaffirmation of the responsibility of states to care for and protect their citizens. Is there a response? Yes. Is this something new? No, we’ve dealt with plenty of complex issues before; it’s incorrect to say otherwise. We simply don’t notice, because regulation works without being felt. Airplanes, for example, are safe and available worldwide; that regulation has been largely successful.
Ultimately, I’m very optimistic, because there are controls, checks on problems, and solutions to problems. For example, you can walk into any supermarket in the world without fear of being poisoned; when a poisoning does occur, it’s a genuine scandal. Look at the regulations governing the agricultural sector: a very complex story. California was the first to change its laws to require low-emission vehicles, at a time when air pollution was costing enormous numbers of lives. Everyone claimed it was impossible, but they finally did it. I could go on: Germany and France have faced very complex issues every couple of decades, and the European Union is a perfect example of what we can achieve. Within a generation after the two world wars, people could travel and settle throughout the European Union. So there are new problems and old ones. Take lead in paint, which was used in good faith until the world eventually reckoned with its price: it is not acceptable to tolerate the poisoning, or even the risk of poisoning, of children. So lead was removed, and that had an impact. The problem had existed for a long time, but it was solved. We’ve done it before and we’ll continue to do it for AI. And that goes back to the question of hiring perceptive individuals.
I believe AI will be able to predict all kinds of things about us: that the probability of developing depression or becoming pregnant is rising, predictions that serve a company’s interests. It may also enable states to take authoritarian measures. I don’t think states should be treated the same way as companies here. When we discuss these technologies, citizens must be protected by democratic control and transparency. Society should not allow companies or public authorities to turn these tools into surveillance tools. That is how we will exercise real responsibility over these technologies. If we adopt this approach, then by 2030 we may look back on this as a problem similar to others, solved in similar ways. To come back to an earlier point: most people don’t understand this technology, and that will likely create disparities in access to it and to the revolution it brings. What can be done? Research conducted over the last few months has looked at giving AI tools to experts and to less-skilled people alike: skills rise among the experts and skills rise among the laymen. It’s true that this can still produce disparities. But to return to the point, rules are crucial, for example for systems that are essential to sustaining human life. Regulations are needed for products using AI in various fields: European vehicles must have AI-based safety systems, for instance, and diagnostic aids must be properly evaluated. However, I’m concerned about regulation that targets research and development; I believe that carries considerable costs. There are proposals to cap the computing power used to train AI. Some people talk about permits, licensing conditions to be met before an AI can be trained, on the grounds that AI is dangerous. There’s talk of banning open-source AI to prevent it from being hijacked. On these points, the positions of China and the United States are similar.
A number of US policies have imposed export controls on semiconductors, upsetting supply chains and severely affecting the development of Chinese technologies. This will have an impact on the development of AI. China is now responding in kind with gallium and germanium, and, somewhat more marginally, graphite, which is needed to manufacture the batteries used in electric vehicles. It is possible that the world’s two leading economies could trigger a global technological cold war, with serious consequences for many technologies. Clearly this must be avoided, to preserve the planet and guarantee the safety of the world’s population, and these issues must be monitored closely. France’s interests are not necessarily aligned with those of China or the United States. This requires regulation: such things should not be controlled by the private sector alone. There needs to be a public structure through which society has input into the technology sector. Technology should have a voice, but not a vote. Authority must be exercised by governments accountable to the electorate, and I firmly believe civil society must have a voice too. Civil society cannot act effectively without access to information, which requires standards of transparency. That is necessary because things are only going to get more complex in the future.
We need to make rational decisions; that will help solve the problems. China and the United States will in fact share certain concerns, even though it’s common to assume the two nations are simply antagonists. I also believe these tensions will persist. These technologies are used all over the world, and I think it’s very risky for governments in the Global South to buy foreign platforms to run their health and education systems. Governments should develop their own technology, but the discussion has to be global, because it affects all of us. Still, it is possible to set up incentives to encourage good uses and safeguards to prevent harmful ones. Most crucial is the need to invest in the capacity of governments: all of them need the capacity to develop these technologies, and these incentives, for themselves. There are two important elements to this question. First, when it comes to regulation, there are rules that define what the different parts of a car must do. But what kind of society does the car create? A society where emissions and climate change exist at scale. That is something technology alone cannot address, and the same will clearly be true of AI. Even strong national laws and regulations won’t be enough on their own; as with nuclear weapons, some things can only be addressed through international action.
So there will be societal changes that escape your own regulations. We should still make our own rules. Our own laws, such as those on clean air and climate, operate within a global framework: most countries have national regulations on air pollution, as in Europe or Canada, but in other countries, such as China, the air can be unbreathable, with negative consequences for those countries. You can’t simply prevent facial recognition everywhere, for example, but you can legislate on your own territory; a genuine democracy can ban facial recognition. You can’t prevent others from doing these things, because they can go to other forums, and we have to accept that these are not such difficult technologies to develop. There are also things that other countries and forums are doing that we are not. Moreover, looking at geopolitical tensions, nations that set good standards for human rights and living standards become places other nations wish to emulate. Having those good standards in place is what matters most. Perhaps I could say more about that. Yes, it’s an intriguing question. I sincerely hope that functional democracies prove the most successful, but I’m not sure the model will stay the same as AI develops. The falling cost of communication strengthened democracy in some ways while weakening it in others. Artificial intelligence is now undergoing its own revolution. Looking three or five years into the future, one question remains undecided: whether a top-down society reinforced by the use of AI will prevail, and what kind of governance we might see in the West, one that could benefit citizens.
A planned economy doesn’t really work. But does that remain true when a government can use companies to collect data in real time? I don’t know. It’s possible that a planned economy becomes more efficient in such a system, while the free, regulated markets of Western countries become comparatively less efficient. I wouldn’t wish for such an outcome, but I don’t know what the real result will be. These are crucial questions. I think we’ve all experienced this fear: it’s difficult to predict and understand, and it will affect many things that exist today. When politicians try to regulate AI, they must strike a balance between capability and control. In Europe, states will try to develop capabilities, but at the risk of not developing them sufficiently because they also try to control them. Do you think that balance is fair, and do you think there can be measures to improve AI capabilities? Because democracies cannot lead on governance if they trail too far behind on AI capability. I think there are big differences between governments within the European Union. Some lean more towards regulation, like the European Commission and Parliament, but in France people tend to disagree with the current AI Act, seeing it as too limiting. Until last year’s summit, the concerns seemed to focus on long-term risks; the discussion eventually shifted to strategies for short-term risks, and to open versus closed. Why does the French government support open source? Because integrating our own culture is the only way to achieve local sovereignty and our own standards; platforms need to be open for that, just as we have an open Internet for the same reason. I sincerely believe that closed approaches push certain governments’ opinions in a negative direction. I don’t think the European institutions reflect the member states’ views in general, so things are a bit reversed. Obviously there’s a lot of discussion about regulation in the UK too, while the French government is focused on business.
Brussels, we know, is going to be firmer than Washington. Looking at the AI Act, the main aim is to ensure that all the rights the EU has developed over the last 30 years can be protected and promoted in the AI era. That is why we focus on high-risk systems: rights such as privacy and confidentiality, the needs of children, the needs of consumers and so on are what is at risk. In all likelihood, these values and fears are widely shared, and the ability to put all these pieces together should develop over time. Then there’s the question of how. There will be disparities between the various jurisdictions, particularly over the role of the regulator and the precise nature of regulation, and all of these issues will be discussed. Ultimately, do we want this technology to be interoperable, that is, usable anywhere in the world? It’s important to understand that technology can only work across borders if the regulations themselves are sufficiently interoperable.
This is the very essence of intelligence, whether human or machine: the ability to predict the future from observable patterns. Our limitations derive from that. So we need new narratives, something new, a fresh framework that might allow all these people to come together and start a new conversation. Especially where values are shared, we need to work harder and share the nuances. The idea is not so much to have a risk-based or ex-ante assessment, but rather to have a global conversation in which everyone can learn. International organizations can offer an alternative basis for that discussion. Built on economic models that are not necessarily aligned with human nature, these technologies risk dehumanizing us. Our children are being subjected to models, and to changes, that are difficult to understand.