LeoAI Solves: Not Your Weights Not Your Brain

in voilk •  3 days ago

    This is a concept that is floating around the AI world:

    Not your weights, not your brain.

    It is something worthy of consideration. It also points to a danger associated with the entities behind the LLMs.

    When dealing with Big Tech, we have to be leery of what they are doing. It is well known that Meta and Google employed a number of psychological tactics over the years. With AI, the situation is even more extreme.

    For this reason, we will dive into weights and how LeoAI solves this problem.


    Image generated by Grok

    Not Your Weights, Not Your Brain

    The power of generative AI is enormous. We are going to see it become even more powerful over the next couple of years.

    For this reason, it is crucial to understand what we are dealing with. Weights do not get a lot of discussion, but they are crucial to the output a model provides.

    Here is what Venice.AI pulled up:

    In AI training, weights refer to the numerical values assigned to each connection between neurons in a neural network. These weights are learned during the training process and play a crucial role in determining the network's performance.

    Think of weights like the strength of a muscle in your body. Just as muscle strength can be increased or decreased through exercise and practice, weights in a neural network can be adjusted to optimize the network's performance.
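    The muscle analogy can be made concrete with a tiny sketch. A single artificial neuron multiplies each input by its weight, adds a bias, and squashes the sum through an activation function. The numbers below are purely illustrative, not from any real model:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# The same inputs produce very different outputs depending on the weights.
inputs = [1.0, 0.5]
print(neuron(inputs, weights=[2.0, 1.0], bias=0.0))    # strongly activated
print(neuron(inputs, weights=[-2.0, -1.0], bias=0.0))  # suppressed
```

    Change the weights and the same input yields a different answer. That, at network scale, is why whoever sets the weights shapes the output.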

    Closely related to the weights is the bias, another trainable parameter that shifts a neuron's output. The term is fitting, because these parameters can also be adjusted deliberately to bias the model. Hopefully you see where this can be a problem.

    When it comes to model development, the weights determine what the model contains. If the decision is made to exclude adult content, those associations receive a different weighting than, say, geography. In effect, the training negates the adult content.

    This might seem sensible when it comes to something like building a bomb. However, we can see how this can easily move into an area where things are filtered. Unfortunately, the ones deciding what is important are not the users.

    Here is where the dilemma arises. Bias is effectively programmed in. It can relate to a host of topics, with ideology and adult content being the two biggest. That said, it is possible that anything to do with guns, even in a historical sense, could be filtered.

    In other words, we have models that basically censor information.

    Not Your Brain

    The idea of synthetic cognitive abilities should be clear. With generative AI, the output is cognition. Automobile factories pump out vehicles; generative models spew forth cognitive abilities.

    Of course, cognition is something that has always been associated with the brain. Throughout history, it is mostly what humans relied upon. However, we are now seeing a different world forming.

    The major consideration is to forecast ahead to what we might be dealing with in a few years. Perhaps this doesn't resonate with people since we mostly have chatbots. Things take on a different meaning when an AI agent is being developed to be your digital twin.

    Do you really want Meta, X, or Google making the decision for you as to what is important? How about Sam Altman?

    I think the answer is clear on that one.

    LeoAI Is The Solution

    LeoAI uses Llama as its base model. Llama is open source, allowing anyone to utilize it.

    Again, we will consult Venice.ai for some more insight:

    Fine-tuning: Existing models like Llama2 are fine-tuned on a specific task or dataset to adapt to the new task or dataset. This involves adjusting the model's weights to better fit the new data.

    Weight sharing: Weights from the existing model can be shared with the new model, allowing the new model to leverage the knowledge and features learned by the existing model.

    The fine-tuning of a model is crucial. Here is where the LEO team can alter the weights that Meta used. The fine-tuning process is where Leo's model starts to split from what Meta built.

    There are a number of other factors which can be altered in the development of a new model. One of the simplest to comprehend is the addition of new data. This, combined with different weights used during training, will result in different capabilities.
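    As a rough sketch of what "adjusting the weights to fit new data" means, here is a minimal gradient-descent loop on a one-weight model. This illustrates the principle only; it is not LeoAI's actual pipeline, and every name and number is hypothetical:

```python
# Minimal illustration of fine-tuning: start from a "pretrained" weight
# and nudge it toward new data via gradient descent on squared error.
pretrained_weight = 1.0               # inherited from the base model
new_data = [(1.0, 3.0), (2.0, 6.0)]   # (input, target) pairs; target = 3 * input

w = pretrained_weight
learning_rate = 0.05
for _ in range(200):
    for x, target in new_data:
        prediction = w * x
        gradient = 2 * (prediction - target) * x  # d/dw of (w*x - target)^2
        w -= learning_rate * gradient

print(round(w, 3))  # the weight has moved from 1.0 toward roughly 3.0
```

    Real fine-tuning does this across billions of weights at once, but the mechanism is the same: the new dataset pulls the inherited parameters toward new behavior.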

    One of the major discussion points regarding AI models is bias. How can this be eliminated?

    Some of it is in the weights, while another portion is addressed via the vector database that is being used. With additional data, biases can be reduced because the model has another resource to access. This can also eliminate many of the hallucinations commonly associated with this technology.
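    The retrieval idea can be sketched in a few lines: instead of relying only on what is baked into the weights, the most relevant stored document is handed to the model as context. Real systems use embedding vectors and approximate nearest-neighbour search; the keyword-overlap scoring below is a deliberately simplified stand-in, and the documents are invented examples:

```python
# Toy retrieval step: score stored documents against a query by
# word overlap and hand the best match to the model as context.
documents = [
    "Hive is a blockchain with free transactions and fast blocks.",
    "Llama is an open family of large language models from Meta.",
    "Weights are the learned parameters of a neural network.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

context = retrieve("what are weights in a neural network", documents)
prompt = f"Answer using this context:\n{context}\nQuestion: ..."
print(context)
```

    Grounding the answer in retrieved text is what reduces hallucination: the model quotes a source instead of guessing from its weights alone.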

    Leo has the ability to produce a more neutral model. Since it takes its data from Hive, people have the ability to keep feeding what the model ultimately utilizes. We also have a direct line to the one heading up the development.

    Try to do that with Google or Meta.

    This is going to be a factor as we advance further into the AI world. It might seem nonsensical at this point, but this technology is going to take over everything.

    Simply put, this is another reason why blockchain data is so important. We can find the weights that Meta used but do not have access to the data.

    But that is a topic for another time.


    What Is Hive

    Posted Using InLeo Alpha
