Put this in the complete waste of time category.
Politicians will spend a great deal of time talking about AI regulation. All over the world, they will hold hearings, fund research groups, and issue orders to get a handle on AI. After all, if something is emerging, politicians feel they have to control it.
This is a futile exercise.
I am here to state that governments cannot regulate AI. It does not matter the country, the government, or the political leaning. None of them have a chance.
AI Is Beyond Government Control
A major part of the fear around AI is the Hollywood conditioning. Another is the fact that humans have a tough time dealing with something that is more "intelligent" than they are.
That said, there is an issue: the idea that AI is going to achieve that kind of superiority is not likely. When we delve into what it means to be conscious, or what intelligence is, things get very fuzzy. There are all kinds of intelligence that we tap into that computers do not have access to.
What about historical memories? Emotional intelligence? Spiritual? Genetic? How does something that is deterministic replicate something that is non-deterministic?
Then we couple that with people holding legislative (and executive) positions who are not even remotely qualified to approach this subject. Some might claim that is the case with every topic, but we will try to give some benefit of the doubt.
AI is a rather tough subject. What makes it so overwhelming is the pace at which things are moving.
For example, California took a stab at putting together an AI bill, SB 1047. Actually, it did that, and the Governor vetoed it.

Part of the bill required models to be registered based upon their size. That seemed like a sensible approach. Yet even if it had been signed, it would already be obsolete.

Smaller models are being trained that have more "punch" than the larger ones of a year ago. A threshold based upon size was thus invalid even before the bill would have taken effect.
Open Source
So what is the answer?
To me, we need to focus upon open source. Along the same lines, we need to decentralize as much as we can.
My biggest fear right now is Sam Altman. He has shown himself to be seeking regulatory capture. In other words, he wants the government to put all the balls (or the majority of them) in his basket.
The logic he espouses is that we cannot trust the masses, so AI technology has to be in the hands of a few trusted corporations. The problem with this concept is that Big Tech is nowhere near the point where we can call it trustworthy. While I would not put it in the same category as Big Pharma, trusting these corporations to operate in the best interest of the masses is absurd.
A better approach is transparency: with open source, anyone can see what is going on. We are also seeing the incorporation of open models into government systems.
Meta is one that is moving in that direction.
"Meta is opening up its Llama AI models to government agencies and contractors working on national security, the company said in an update. The group includes more than a dozen private sector companies that partner with the US government, including Amazon Web Services, Oracle and Microsoft, as well as defense contractors like Palantir and Lockheed Martin."
It is easy to hate Zuckerberg, and I will be the first to admit that he is not to be trusted. At the same time, we know there is a self-serving element to what he is doing. The positioning of Meta is intentional. Obviously, this is a smart business move.
That said, we don't want to depend just on Meta. Others have to step up. There is safety in numbers. Hopefully we will continue to see smaller, yet powerful models emerging that can help to offset the dependence upon the larger ones.
This is something that we are all going to have to figure out. To me, it starts with feeding the Big Tech players as little as possible. Naturally, we cannot completely avoid them, but continually running to them makes no sense.
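For those who want a concrete step, one option is running a small open-weight model on your own machine instead of sending every prompt to a hosted service. Here is a minimal sketch using the Hugging Face transformers library; the model name is illustrative, and any small open chat model you have access to would work the same way.

```python
# Minimal sketch: run a small open-weight model locally so prompts
# never leave your machine. Requires: pip install transformers torch
from transformers import pipeline

# The model name is illustrative; swap in any small open model.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # ~1.1B params, CPU-friendly
)

prompt = "In one paragraph, why do open source AI models matter?"
result = generator(prompt, max_new_tokens=120, do_sample=True)
print(result[0]["generated_text"])
```

Nothing in this sketch depends on a Big Tech API key; the weights download once and the model runs offline afterwards.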
Collectively, there are steps we can take to help alleviate some of the risk that exists.