We are in the AI Singularity



Barn-mucking #23,489,834:



AI has already existed for centuries

Did you know that AI has already existed for centuries? I bet you think that assertion is incorrect, or hyperbole, or click-bait. It is none of these; it is simply true. For something to be intelligent, it must have ends and goals, and it must try to use the available means at its disposal to achieve those goals in a way that is "sensible". Your cat, for example, is definitely intelligent. It's not lingual and it's not capable of much of the abstract reasoning that humans are capable of, but it's still intelligent. In fact, as leading AI researcher Yann LeCun famously said, "Your cat is smarter than ChatGPT". And so it is.

Artificial intelligence is intelligence that is constructed (by humans). We ordinarily associate this label with digital electronics, but the field of AI itself -- AI proper -- is not tied to any particular implementation substrate. Digital-electronic AI is just one kind of AI. Technically, a vending machine is a form of AI agent. It's not stupid; it doesn't spit out products randomly. It's smart enough to tell fake coins from real ones, it knows when you've put in enough cash to make a purchase, and it knows how to make change. It's no Einstein, and it's not even as smart as your cat, but it's not as dumb as a rock, either.
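
To make the point concrete, here is a minimal sketch of a vending machine's decision rules. All names and coin values are invented for illustration; this is not any particular machine's firmware:

```python
# Toy vending-machine "agent": its ends (dispense correctly, keep the money honest)
# and its means (simple decision rules). Everything here is invented for illustration.
ACCEPTED_COINS = {5, 10, 25, 100}   # cents; anything else is treated as fake/unknown

def vend(price_cents, inserted_coins):
    """Return (dispense?, change_in_cents, rejected_coins)."""
    rejected = [c for c in inserted_coins if c not in ACCEPTED_COINS]
    credit = sum(c for c in inserted_coins if c in ACCEPTED_COINS)
    if credit >= price_cents:
        return True, credit - price_cents, rejected
    return False, 0, rejected   # not enough credit yet: keep waiting, return the fakes

print(vend(150, [100, 25, 25, 3]))   # (True, 0, [3]): dispense, no change, reject the slug
```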

Vending machines have been around for decades, not centuries, so how can I assert that AI has already existed for centuries? Simple: the corporation is an artificial intelligence. Corporation in the broadest sense includes even public organizations like parliamentary government --- a corporation is not the people who make it up (those are just the agents in the corporation); the corporation is the decision-making machinery itself. The easiest way to see this, IMO, is to look at Robert's Rules of Order, which give the basic sketch of the procedure used by any corporate board or parliament. There is a body called the board (or parliament, or congress) which holds posted meetings headed by a chair (administrator). The chair is the "CPU" of the meeting. The motions before the board are the "documents" which the board is going to "process". The primary output of the board meeting is decisions (commands). The commands are generated by the networked agents (board members), who are required to follow the board's network protocol (the bylaws and the rules of board procedure). The commands issued by the board are then conveyed to a robot which we call a factory, or facility, or warehouse, or fab, or whatever.
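
To make the analogy concrete, here is a toy sketch (my own illustration, not anything taken from Robert's Rules itself) of a board as a decision-making machine: networked agents vote on motions under a fixed protocol, and the output is a stream of commands:

```python
# Toy sketch: a corporate board modeled as a decision-making machine.
# Hypothetical names and rules -- for illustration only.
from dataclasses import dataclass

@dataclass
class Motion:
    text: str              # the "document" to be processed

class Board:
    def __init__(self, members, quorum=3):
        self.members = members      # the networked agents
        self.quorum = quorum        # part of the "network protocol" (bylaws)

    def meeting(self, motions):
        """The chair ("CPU") processes motions into decisions (commands)."""
        commands = []
        for motion in motions:
            votes = [member(motion) for member in self.members]   # each member votes True/False
            if len(votes) >= self.quorum and sum(votes) > len(votes) / 2:
                commands.append(f"EXECUTE: {motion.text}")
            else:
                commands.append(f"REJECT: {motion.text}")
        return commands   # conveyed to the "robot" (factory, warehouse, ...)

# Example: three members who vote by fixed, simple-minded preferences.
members = [lambda m: "budget" in m.text,
           lambda m: True,
           lambda m: len(m.text) < 60]
board = Board(members)
print(board.meeting([Motion("approve budget for new warehouse"),
                     Motion("mothball the downtown facility for the winter")]))
```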

This structure is centuries old, dating back at least to the East India Company, and even earlier. If you include religious organizations in the analysis, it is thousands of years old.

Corporations are artificial -- they are constructed entities, they are not naturally occurring. And corporations are intelligent. They're kind of retarded, like vending machines, but they're a little smarter than vending machines (but not by much). Thus, corporations (in all their manifestations, including parliamentary government) are a form of artificial intelligence.

AI agents, or, "So you want to be a manager?"

All the hand-wringing about how AI is gonna "take urrr jubbbs!!" is absolutely ridiculous.

Let's suppose a cave explorer discovers an immense underground structure. There, he finds an enormous population of humans who have been living there since time unknown. Surprisingly, these people turn out to be super-friendly and are happy to meet surface-people. In fact, they are so happy to meet them and to escape the cave that they are willing to do practically anything the surface people want, in exchange for nothing more than a daily meal and a place to sleep. Tens of millions of these fresh workers spill out into the economy. Would this cause an economic crash, a boom, or have no effect?

As long as they are capable of learning rote tasks, this glut of labor would cause an economic boom. The cost of "getting stuff done", where "stuff" might be clearing brush, moving wheelbarrows, cleaning gutters, and so on, has just gone down enormously. This means that the operating costs of businesses (which always have to do these kinds of tasks) have just gone down enormously. Which means that businesses across the board can slash prices to attract customers. Which means that prices of goods and services across the economy will go down. Which is equivalent to a universal pay raise. Which is an economic boom. These cave-dwellers would be a far bigger economic "shock" than any Transformer-based AI system will ever be.
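
A back-of-the-envelope sketch of that chain of reasoning, with entirely made-up numbers, and assuming competition passes cost savings through to prices:

```python
# Toy arithmetic for the "labor glut" argument. All numbers are hypothetical.
old_unit_cost = 100.0          # cost for a business to produce one unit of "stuff"
labor_share = 0.40             # fraction of that cost which is low-skill labor
labor_cost_drop = 0.75         # cheap labor cuts that component by 75%

new_unit_cost = old_unit_cost * (1 - labor_share * labor_cost_drop)   # 70.0
price_drop = 1 - new_unit_cost / old_unit_cost                        # 30%

wage = 4000.0                       # a worker's unchanged monthly wage
old_basket = wage / old_unit_cost   # units of "stuff" the wage used to buy
new_basket = wage / new_unit_cost   # units it buys after prices fall

print(f"Prices fall ~{price_drop:.0%}; the same wage now buys "
      f"{new_basket / old_basket - 1:.0%} more -- a de facto pay raise.")
```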

And what happens to the people who used to work these low-skilled jobs? Well, they know how to speak English, and they know the rules of American society (the cave-dwellers do not), so obviously, they are going to be paid to manage teams of these cave-dwellers. You will not be replaced by an AI agent... at most, you will be replaced by a team of AI agents... and your job will be to manage agents. Congratulations, you've been promoted to management!

Stewardship, liability -- the human social algorithm

No matter how amazing AI eventually turns out to be, the fact will always remain that it is ARTIFICIAL. The root "artifice" means clever, skillful craft -- that is, something constructed by man. Thus, all AI/robotics will exist within the larger and original framework of human society and human ends.

Contrary to Marxism with its blank-slate theory of human society (that society can be "whatever we want it to be, as long as the proletariat agrees to it"), the reality of human society is that we are, and always have been, running software that we might call the human social algorithm. Human society is a distributed, wetware computation that is running this social algorithm, has always run it, and will always run it, regardless of AI/robots. When AI/robots are added to the mix, that will update our social algorithm, just as the invention of the automobile updated the manners of chivalry so that a man now holds the car door for his date. But notice that it is the artificial thing (the automobile) which has been adapted to the context in which it exists -- the human social algorithm. Chivalry is older and more primary; cars are recent and novel. So it is the car that gets adapted, not the chivalric framework.

In the same way, AI/robotics is going to be adapted to the human social algorithm. The core pillars of the human social algorithm are stewardship and liability. This is perhaps most obvious in the context of the military. Every piece of military equipment is on a ledger that is attached to some command. That command is the steward of the equipment; that is, it is the duty of that command to see to it that the equipment is properly stored, maintained, secured, etc. And failure to carry out commands brings with it liability (to punishment). If you disobey an order, rule or law, you are liable to be punished. Punishment is the backbone of the human social algorithm, along with stewardship (the foundation of private property).

Note that AI is incapable of both stewardship and liability. The AI cannot actually care for something, because it has no consciousness. To care is a conscious feeling; it's not merely a set of actions. A pilot's care for the aircraft, crew and passengers (and his own life) is the sweat on his palms at the thought of crashing. That is what care is, and that is the foundation of stewardship. Nor can an AI be punished in any human sense. The AI is incapable of feeling anything at all, so it can neither care to be rewarded nor care to avoid punishment; it is necessarily indifferent.

Since these principles are the bedrock foundation of human social order, and since AIs are incapable of being affected by them as humans are, there will necessarily have to be a membrane between human social order and AI/robotics. This membrane will be the AI/robotic managerial class... those people who manage, command, train, maintain, repossess, and dispose of AI and robotics systems. In short, the future is Blade Runner. Those on the human side of that membrane will not have to care about AI/robotics at all. After the hype dies down, AI/robotics will become a permanent slave class, like the cave-dwellers but not nearly as intelligent or useful. Eventually, I predict that humanoid robots will become a kind of untouchable class... widely despised by ordinary humans as abominations that we tolerate because they are useful to us, and nothing more.

The myth of hyper-productivity

Back in the day, before mechanization, it was not the man who could run the fastest or plow the most ground who was revered. Rather, it was the man whose life was ordered, and who had acquired wisdom through experience. The man who worked in an organized, orderly way, always moving closer to his goal -- this was a man who had acquired the calm centeredness of always knowing his next step and who was, therefore, free of the frenetic scurrying of lesser men. This is what we used to revere as wisdom.

"Productivity" can also be called "actions per minute" (APM) in a gaming context. APM is a measure of game engagement. It is completely useless as a measure of real skill and productivity -- if anything, it's a measure of how good you are at looking busy. "Measure twice, cut once." That second measurement is slow, and decreases APM (increases actions per widget "produced") But measuring twice and cutting once is the essence of real mastery. And real mastery is better than productivity for the same reason that one brick house built on a concrete foundation is better than 10 yurts dropped off in a field somewhere. "We are building houses at 10x speed!!" Yeah, but what kind and quality of house?! Will it be around in 10 years from now? Will its quality have massively deteriorated, or will it still be as functional as the day it was built? That kind of long-run thinking is the opposite of "hyper-productivity" or the silly term "10x engineer". 10x of WHAT? 10x lines of slop-code that is filled with AI bugs? It will take time for the difference between the yurts and the brick house to be easily visible. But not that much time... very quickly, people will see how sandy the foundation of all this AI hype really is.

What about entry-level jobs?

The war on entry-level (anything) is already well-documented and has nothing to do with AI per se. Yes, AI will make it worse, but that's because AI is just a new tool that those who are at war with entry-level people can use to keep kicking them down, even more efficiently. The root problem is not AI; it is evil people who think that kicking down those who are junior to them is a recipe for their own promotion in life. These kinds of people are evil and insane, and shockingly prevalent. The propaganda about eliminating all entry-level jobs is just doom-porn for the psychopaths who are already kicking away the ladder. This content is popular because those who are already doing this are looking forward to even more powerful ladder-kicking tools in the future. But there is absolutely nothing in AI that is inherently hostile to entry-level jobs. It makes no sense; it's just fear-based superstition.

AI hype is pure propaganda and brain-poison.
 
File this under "AI Hype-Bro loses his religion"...



---



Black Swan (2010) - Ending/Final Scene

This movie captured the missing ingredient that AI slop "art" will never have. Real art ... is dangerous. The real artist invests their blood, sweat and skin into their art and that is what makes it art in a way that nothing else can ever be. The ultimate work of art is, necessarily, also a sacrifice. There is no lesser price that can be paid...
 
Grok 4 released --- lots of hyperbolic claims about hyper-intelligence, etc. Get back to me when Grok 4 can beat ARC2. Or even ARC1 without fine-tuning.

:rolleyes:
 
There is an AI dataset with the ridiculous name, "Humanity's Last Exam". Here is humanity's actual last exam:

image.png


Now is the time to get ready for pencils-down!
 
Technical video, but worth it for those who want to deep-dive into what is really coming in AI's future, despite the hype:



We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned. Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in a more and more alike way. We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato's concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis.

Paper: The Platonic Representation Hypothesis

Website: prh
 
Sabine drops the ball again... :eyeroll



If there is ONE message that people need to take from this thread, it is this: the idea of recursive self-improvement is a total and absolute myth. It's a popular sci-fi trope, emphasis on the FI. First of all, we can prove, mathematically, that recursive self-improvement in the sci-fi sense is impossible. It's not "maybe possible" or "depends on evidence" or "maybe if we have a new physics discovery"... it is provably impossible unless you can perform an infinite number of computational steps in a finite amount of time (and that applies to any sequence of computational steps; we're not talking about just a sub-language, but the entire universal language L_u).
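
For readers who want the flavor of the computability obstruction that arguments like this usually lean on, here is a hedged sketch in my own framing (via Rice's theorem); it is not a transcription of any specific proof the post has in mind:

```latex
% Let $\varphi_p$ denote the partial function computed by program $p \in L_u$.
% Rice's theorem: for any non-trivial semantic property $P$ (one that some computable
% functions have and others lack), the index set
\[
  I_P \;=\; \{\, p \in L_u \;:\; \varphi_p \text{ has property } P \,\}
\]
% is undecidable. For a fixed behavioral specification $S$ and a fixed program $q$,
% "program $p$ meets $S$ at least as well as $q$ does" is such a property, so no
% program can decide it uniformly over all of $L_u$. A would-be self-improver can
% therefore only certify its "improvements" on restricted, decidable fragments --
% never over the entire universal language, which is what unbounded recursive
% self-improvement would require.
```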

Second, we've already been doing "recursive self-improvement" for centuries. The Gutenberg press is the first example, but more recently we can point to the use of AI algorithms in computer chip design. It would be impossible to build chips with 9+ billion transistors on them without AI algorithms. Humans couldn't even "copy-paste" 9 billion transistors into a uniform grid -- the layout is all done by automated algorithms (placement and routing). These algorithms have been quietly "recursively self-improving" your computer hardware for decades, without you even knowing about it. In the last decade or so, Moore's law has not held -- transistor counts per die are no longer increasing at the same exponential rate as before, and clock frequencies stopped increasing almost two decades ago. Yet computers are faster than they have ever been and, by some real-world benchmarks, are improving at an even faster rate than Moore's law would predict. Why? Because of the constantly improving algorithms used in chip design, that's why. In other words, this is what real, non-hype recursive self-improvement looks like.
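
For a taste of what those chip-design algorithms actually do, here is a toy sketch of one classic automated-placement technique (simulated annealing over a tiny invented netlist). Real place-and-route tools are vastly more sophisticated; everything named here is made up for illustration:

```python
# Toy placement by simulated annealing: put six "cells" on a 4x4 grid so that the
# total wirelength of the nets connecting them is minimized. Invented netlist.
import math, random

random.seed(0)
CELLS = ["A", "B", "C", "D", "E", "F"]
NETS = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("A", "F")]
GRID = 4   # 4x4 grid of legal cell sites

def wirelength(placement):
    """Total Manhattan distance over all nets -- the cost we want to minimize."""
    return sum(abs(placement[a][0] - placement[b][0]) + abs(placement[a][1] - placement[b][1])
               for a, b in NETS)

def anneal(steps=20000, t0=5.0, cooling=0.9995):
    sites = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], len(CELLS))
    placement = dict(zip(CELLS, sites))
    cost = wirelength(placement)
    temp = t0
    for _ in range(steps):
        a, b = random.sample(CELLS, 2)
        placement[a], placement[b] = placement[b], placement[a]   # propose swapping two cells
        new_cost = wirelength(placement)
        # Always accept improvements; accept worsenings with a temperature-dependent probability.
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            placement[a], placement[b] = placement[b], placement[a]   # undo the swap
        temp *= cooling
    return placement, cost

placement, cost = anneal()
print("final placement:", placement, "total wirelength:", cost)
```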

Recursive self-improvement in the sci-fi-hype, singularitarian-cult sense is just cannibalism, nothing more. We know from the anthropological record that humans were once quite cannibalistic. Were we "recursively self-improving" because of cannibalism? Don't be ridiculous. If cannibalism is the sine qua non of "the intelligence explosion", then cockroaches must be hyper-intelligent already! Recursive self-improvement in the sci-fi sense is the AI equivalent of the perpetual motion machine. Provably false, hype mumbo-jumbo being peddled to boost sales of snake-oil. Sabine is falling for it just like most wide-reach AI commentators.

Reject the propaganda!

PS: The Darwin Gödel Machine mentioned at the end is not hype; it is built on a robust theoretical foundation. However, we know (again, from mathematical proof, not just "empirical evidence") that these machines will not solve the hardest problems any better than hand-written algorithms would. Rather, they will be able to quickly mine easy problems at scale (meaning far faster than humans can manually do this work) and eventually solve problems of moderate difficulty over time (months, years, decades), but they too will be stumped by the hardest problems, just like we are. This is the point where the AI propaganda gets exposed... the hardest problems, like computing the bits of Chaitin's Omega constant, or estimating Solomonoff's universal prior. These are not just abstractions; they are at the very core of what it even means to "solve problems". AI will be just as stymied by these (extremely important!) meta-problems as humans are, and provably so. Reject the propaganda BS!!
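
For reference, the two objects named above have standard textbook definitions (this is general algorithmic information theory, not anything specific to this post). For a prefix-free universal machine U:

```latex
% Chaitin's halting probability ("Omega"):
\[
  \Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
\]
% Knowing the first $n$ bits of $\Omega_U$ would settle the halting problem for every
% program of length at most $n$, so $\Omega_U$ is uncomputable (and algorithmically random).

% Solomonoff's universal prior over finite strings $x$ (summing over programs whose
% output begins with $x$):
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
\]
% $M$ is lower semicomputable but not computable, which is why it can only ever be
% approximated, never "solved" exactly.
```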
 
Barn-mucking #52,340,039,140:



I find it hilarious that this guy has unironically pasted a soy-boy meme of himself in the thumbnail without even realizing it ... :LMAO



Here are his phases of AI acceptance:

1) Ignore it
2) Skepticism
3) Curiosity
4) Acceptance
5) Optimization

I skipped through some snippets of this because I'm not going to listen to the one-millionth AI hype-pitch about why I should believe that 2025 is the year that literally everything becomes AI. He doesn't seem to be as irrational as most AI hype-bros, but he's still missing the actual stakes involved and ends up missing the mark with respect to how AI is actually going to change software (among other things).

Let's break it down in terms of economics. Let's suppose Bob wants to build a house. Bob has no house-building skills or equipment (or time), so he's going to have to pay somebody to build his house for him. So, Bob calls up Charlie and hires him to build a house. Now, Bob is going to give Charlie, say, $250,000 and Bob expects to receive in return from Charlie one completely built house. That includes all the things that a house is supposed to have, and none of the defects that are unwanted in a house. Charlie is supposed to build a house that Bob wants, according to Bob's specifications. If Charlie can't do this, he's supposed to tell Bob right at the outset and this will even be written into the contract to MAKE SURE that Charlie is going to hold up his side of the deal. Bob's reason for believing that Charlie can and will deliver the house in exchange for the $250k is that Charlie has done this kind of project before, successfully, with positive reviews from satisfied customers. Perhaps Bob got Charlie's number by a word-of-mouth reference from a trusted friend who had earlier hired Charlie.

Why am I talking about house-building? What does that have to do with AI and software? Well, notice that everything in the previous paragraph is inextricably human. Every single component of what is happening, here, is thoroughly human ... human agreements, human expectations, human exchange, human communication, human friendships and social networks, etc. What can AI add to these building-blocks? Absolutely nothing. It can facilitate them (just as the Internet or phones facilitate them) but it cannot add to them or enrich them in any way. The AI can at best be a matchmaker, a go-between, a finder, and so on, and so forth. Those are valid activities, but they are all service activities, they are not legally agentic activities. And that's where the rubber meets the road: legal agency.

The important thing about Charlie's agreement to build the house in exchange for the $250k, from Bob's perspective, is that if Charlie for some reason doesn't do what he has agreed to do, Bob has punitive recourse against Charlie. Bob can sue Charlie or even have him charged criminally, depending on the severity of the contractual breach. Charlie is a legal agent; he is a decision-maker who can be held to account in a court of law for his decisions. In this specific respect -- legal agency -- AI can, by definition, add absolutely nothing whatsoever, not even as a service! AI is strictly out of scope with respect to legal agency because AI cannot be punished!

Now, let's say that Bob wants to build a new mobile app. Bob has no skills in app-building, any more than he does in house-building. So, Bob calls up Dave and contracts with Dave to build an app for him. Bob is going to pay Dave $25,000 for the app that is to be delivered, according to XYZ specification which is contracted when the agreement is signed and the payment made. With respect to the economic transaction occurring here, this is precisely the same situation as the house-building; the only difference is that the product being delivered is digital rather than physical. Once Dave is paid, he spins up an AI bot and has the AI write the entire app for him. With just 10 hours of total work, start-to-finish, Dave has a fully functioning app ready for Bob to start testing. To avoid shocking Bob too much, Dave waits until the end of the week to make final delivery of the app. Bob is still astonished at how quickly Dave was able to do the work, but pleased that he can start testing the app right away and get it ready to be formally launched to his customers.

Notice how the AI's role in this example is purely incidental to the fundamental transaction that took place. Does Bob care if Charlie hires humans to build his house, or whether Charlie utilizes some advanced, whiz-bang home-building robotics to build his house? Absolutely not, all that Bob cares about is that the house is built to specification, according to the contractual agreement, on time, and that this work is completed for the agreed price ($250k). The same applies to the mobile app that Bob pays Dave to write. It does not matter to Bob whether Dave built the app in 10 hours or 40 hours, nor does it matter to Bob whether Dave used AI or not. What matters is that the app works to specification, and is delivered on time for the agreed price.

All of this "AI will replace us all!" crap is made-up nonsense being pushed by delusional people who probably shouldn't be allowed outside of a padded cell. They are confusing the shovel for the man who uses the shovel to dig a ditch. If I pay you to dig a ditch, I'm not paying you to "use a shovel", I'm paying you to dig a ditch. If you rent a Ditch-Witch to do the job more quickly and efficiently, so be it, that's your choice in how to do the work that was contracted.

Understanding the boundary between "what" and "how" is absolutely crucial to understanding the future of AI. In engineering, we call this "specification domain" versus "solution space". The specification domain is WHAT is to be done. The solution space is HOW it is done. AI may change the solution space ever so much, but it cannot -- by definition! -- alter the core components of human agency which comprise the specification domain. Bob is paying Charlie to build a house, not to use a particular building technique, such as hand-hammering all the nails versus using nail guns. Or, if that were specified in the contract because Bob cares so much about it for some reason, Charlie is going to adjust the price for the job accordingly. But no matter how much of the "HOW" is specified in the contract (which just becomes part of the "WHAT" to be delivered), the WHAT-ness of human agency can never be eradicated or even altered! Charlie remains a legal agent who is liable for his delivery (or non-delivery) of the contracted goods! All of this applies with equal force to the software industry as it does to home-building. AI cannot add one iota of value to human legal agency because AI cannot be punished. It really is that simple.
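
A small sketch of that WHAT/HOW split in code, with invented names: the contracted acceptance test is the specification domain, and the two interchangeable implementations (hand-written vs. an AI-assisted stand-in) live entirely in the solution space:

```python
# Sketch of "specification domain vs. solution space", with invented names.
# The WHAT is the contracted acceptance test; the HOW (hand-written or AI-assisted)
# is invisible to it, and the legal liability stays with the human who signed.
from typing import Callable

# WHAT: the contracted specification -- sort a list of order totals in descending order.
def meets_spec(implementation: Callable[[list[float]], list[float]]) -> bool:
    cases = [([3.0, 1.0, 2.0], [3.0, 2.0, 1.0]), ([], []), ([5.0], [5.0])]
    return all(implementation(inp) == expected for inp, expected in cases)

# HOW #1: a hand-written implementation.
def hand_written(totals: list[float]) -> list[float]:
    return sorted(totals, reverse=True)

# HOW #2: stand-in for an AI-generated implementation (same contract, different origin).
def ai_generated(totals: list[float]) -> list[float]:
    result = list(totals)
    result.sort()
    result.reverse()
    return result

for name, impl in [("hand_written", hand_written), ("ai_generated", ai_generated)]:
    print(name, "meets spec:", meets_spec(impl))
# Either way, Dave -- not the tool -- is the party liable for delivering what was contracted.
```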

So, regardless of how much software developers dig in their heels against AI -- or don't -- the reality will remain that AI is just a tool. The Silicon Valley agenda to transform the public perception of AI as something beyond merely a tool is really a religious agenda. They are trying to push a new religion and a new god onto you, all without even disclosing that they are preaching a new gospel to you. They are trying to pass this all off as "technological progress", but it is no such thing. It is a new entrant into the pantheon of world religions, trying to camouflage itself as "the inevitable march of technology". As soon as AI is presented as crossing over from mere solution-space, into the specification domain -- that is, as soon as AI is no longer a glorified shovel, but is actually telling you what ditch you may or may not dig -- it is no longer a tool or a technology of any kind, it is a religion making theological claims about the powers and authorities of its new-fangled deity: the AI.

Before all is said and done, this is going to get UGLY...
 
Bill Gates spills the beans on the real agenda behind AI:



AI IS THE BEST TOOL THE MARXISTS HAVE EVER HAD

There is not a shred of doubt in my mind that Marxism is 100.0% satanic, to its very core. It is the very economics of Hell itself. The Beast World Order plan has always been to converge to a global Marxist tyranny ruled over by a "god" in the flesh, the Antichrist, who will be the ultimate Big Brother entity, far worse than 1984. Don't let your guard down for a second... AI is useful, obviously, but it is also a weapon that lends itself to the Marxist agenda in countless ways. The global "eLiTeS" think they're ushering in a future Olympia where they will become gods in some bizarre ET pantheon, ruling over the earth like their own private hunting preserve. But this is not what will happen. They will all be stripped of everything they "earned" in the Beast System, they will be hunted down, then stuffed and mounted like trophies on the Antichrist's wall, while he flies around the world like Superman torturing and murdering everyone he feels like, in the open air. All of humanity will go into absolute squalor and destitution except those who take the Mark and become zombie puppets of the Antichrist. The nightmare that is actually coming will swallow whole the most nightmarish fantasies that all the tyrants of the world, both known and unknown, have ever concocted in their wildest flights of fancy, added together. And after all is said and done, they will be at the head of the line to go into the lake of fire. AI and robotics are going to undo the limits of physical embodiment ... and that's the worst possible news because the limits that God has put into place were put there for our protection...
 


Comment:
@allpapiodin
2 hours ago
Video posted and in 15 mins, Gemini AI already slapped a "Climate Change" blurb under the video that is not even about climate change. Wooo!!!
 
In the Future All Food Will Be Cooked in a Microwave, and if You Can’t Deal With That Then You Need to Get Out of the Kitchen

Written by Colin Cornaby



As a restaurant owner – I’m astounded at the rate of progress since microwaves were released a few short years ago. Today’s microwave can cook a frozen burrito. Tomorrow’s microwave will be able to cook an entire Thanksgiving Dinner. Ten years from now a microwave may even be able to run the country.

Recently I was watching a livestream of a local microwave salesman. He suggested that restaurants should cook all their food in a microwave.

We all need to transition to this way of cooking, because clearly this is where the future is going. I expect in a few short years kitchens will be much smaller. Gone will be stoves and ovens and flat tops. Restaurant kitchens will only be a small closet with a microwave. I predict this will happen by 1955 at the latest.

Many chefs I know get upset at me when I tell them this. But this is the truth: If you can’t cook everything you make in a microwave that’s a skill issue. You need to learn now because when everything is cooked in a microwave you’ll be out of a job. When microwaves are everywhere you’ll be so far behind you’ll never learn how to use a microwave. Chefs who use tools besides microwaves are luddites. They live in fear of the future.

If you want to learn how to use a microwave I would suggest starting with my $49.99 two week course. You should also subscribe to my blog.

Recently I was banned from my favorite chef subreddit for posting pictures of all my microwaved food. I was told I was spamming. These are the types of emotional people I deal with. But much like any other discriminated against group I am fighting for acceptance. If my microwaved food triggers you then you clearly are not ready to accept the future of all food.

At my restaurant I’ve moved all my employees to exclusively using microwaves. After I threatened to fire any employee that complained everyone told me the microwaves were great. But I only threatened them so everyone would love the microwaves.

That’s not to say there haven’t been growing pains. When a great steak comes out of the microwave I get really excited. But more than half the steaks that come out of the microwave get sent back by the customer. To solve this problem I now run ten microwaves in parallel cooking ten steaks. One out of the ten steaks will most likely be good. The number of microwaves has required me to upgrade my restaurant’s electrical system and I now have a small nuclear reactor installed in the parking lot.

I saw online another restaurant owner suggested deploying one thousand microwaves for each chef. This sounds like a great idea. The restaurateur also has heavy investments in microwaves and might be over-leveraged. I try not to think about that too much.

One of my chefs mentioned that if they could cook the steak on the grill they could get it right the first time. This is not an acceptable attitude in the microwave era. Chefs have fragile egos and they all seem to enjoy cooking (???) so it’s obvious they’re just too attached to the food. Also they’re worried I’m planning on firing all of them. That’s true but not relevant here.

I’ve solved this by putting blindfolds on all my chefs. In no circumstances are they to look at the food. I don’t look at the food either. Looking at the food is how restaurants in the past operated. We don’t work that way any more. Have there been several poisonings? Yes. But the food gets out much faster now.

What’s that you’re saying? In a world where everyone has a microwave that microwaved food will be a commodity? That my chefs and my quality are actually what will distinguish my restaurant when everyone has microwaves? Microwaved food is extremely copyable and will become more difficult to build a unique business around? If I make my chefs exclusively use microwaves they’ll forget how to cook and I won’t be able to even pivot back?

Listen. First – you need to calm down. This is the sort of emotional response I’m talking about. You’re clearly irrationally anti microwave. And that sounds like a next quarter problem – and we don’t talk about next quarter problems.

Second – you need to realize I’m an idea person. Ok? Who else would have thought about putting pepperoni on a pizza? And if I didn’t have a microwave no one may have delivered that idea at all. With a microwave I was able to deliver that idea much faster. The new economy will be purely idea based. Is the quality of a microwaved pizza worse? Sure. But by 1960 cooking pizzas in ovens will be a thing of the past. I don’t have any evidence to back that up. But any rational person can see in a few short years ovens will be gone.

And finally – what do you think is going to happen? That microwaves are just going to go away someday? That we’ll all go back to cooking exactly the way we used to? Well you didn’t say that but that’s what I want to talk about. Because clearly there is no middle ground and everything has to be done in a microwave. You’re being very inflexible about doing everything in a microwave and that won’t serve you well in the new microwave era. And we all know that a microwave is the solution for everything, has no underlying problems, and the rate of progress on microwaves will be infinite.

Anyway, I have to go. I’m busy tracking the minutes my chefs are running their microwaves so I know who to fire. This is a foolproof system that there is absolutely no way to game.
 