There are 8 billion people in the world today, and they are being watched by things that aren’t human. You are being watched right now, by metallic eyes that orbit far above you, outside our planet’s atmosphere. But they do not just watch you from space.
They’re in your computer. They listen through your phone. Even right now, as you’re watching this video, be advised: something is watching you back.
They examine the patterns of what you buy, of where you go, of what you search for on Google; all the data that exists in the world about you, they attempt to scrutinise. They are AI. And they are here to stay.
To some degree, AI is an aspect of modern society that we’ve come to accept as normal, even if we’re a little uncomfortable about it. But if you think we’re talking about only commercial AI today – the algorithms that companies use to try to convince you to buy things – you’re wrong. There is something that governments have been developing that exists at a level of power and complexity above that – military-grade AI, the sort used by surveillance agencies and battlefield operators.
Hidden in hard drives in a secret, remote location in America, there is one AI in particular we’re going to take a look at. Its name? Sentient.
Its goal? To predict the future. I’m Alex McColgan, and you’re watching Astrum.
Join me today as we discuss the classified AI that America is using in its space-based spy programs – one that has so much autonomy, it doesn’t just receive data from satellites, it actually directs them. What exactly can Sentient do? And what does it mean for humanity’s future, both in this world, and off it?
Obviously, as a disclaimer: this is a classified program, so there are many things we don’t know. But by looking at the officially publicised statements and accounts from former officials, and by examining similar commercial AI in the private sector, we can make some pretty educated assumptions. Using AI in intelligence-gathering does make a lot of sense.
Whether you are a company trying to take advantage of market trends, or a nation trying to stay on top of threats to your security, it’s useful to know what everyone is doing, and also extremely difficult to properly analyse everything. As I mentioned at the beginning of this video, there are 8 billion humans on Earth right now. Each one has a complex life filled with routines, hobbies, interests, and affiliations.
The sheer amount of data that it would take to keep track of everyone is overwhelming… as illustrated by how hard it is to keep up with all your friends on social media. To some degree, this is true for space as well. There are approximately 100 billion stars in our galaxy alone, and 2 trillion galaxies in the observable universe.
If you took as little as a second to look at each one individually, it would take 6.3 quadrillion years to see them all. Keeping tabs on so many stars is a mammoth task beyond the scope of any one person, or even a large group of people.
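If you’d like to sanity-check that figure, the arithmetic fits in a few lines of Python:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25    # about 3.16e7 seconds

stars_per_galaxy = 100e9                    # ~100 billion stars in a galaxy like ours
galaxies = 2e12                             # ~2 trillion galaxies in the observable universe
total_stars = stars_per_galaxy * galaxies   # ~2e23 stars

years_needed = total_stars / SECONDS_PER_YEAR   # at one star per second
print(f"{years_needed:.1e} years")              # -> 6.3e+15, i.e. 6.3 quadrillion years
```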
So, for surveillance or science, there are immense benefits for the organisation or nation that can trawl through the endless data and find patterns or anomalies. And AI is extremely good at processing large amounts of information quickly. This makes it wonderful when it comes to the world of astronomy, as it can identify interesting observations for humans to take a closer look at.
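To give a loose flavour of how that works – this is a statistical toy, not how any real survey pipeline operates – flagging an unusual observation can be as simple as an outlier test:

```python
import statistics

def flag_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mean) > threshold * stdev]

# Toy brightness measurements of a star: the dip at index 4
# might be a planet passing in front of it - worth a closer look.
brightness = [1.00, 1.01, 0.99, 1.00, 0.62, 1.00, 1.01, 0.99, 1.00, 1.00]
print(flag_anomalies(brightness))   # -> [(4, 0.62)]
```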
Only, AI doesn’t care whether we’re above the microscope, or beneath it. And the sort of AI being used to look at the actions of humans is becoming much more powerful than the sort looking at the stars. Which is why you may find yourself in the uncanny situation of talking about a subject with a friend, then suddenly finding adverts for that same subject on your phone.
This usually isn’t just coincidence. Rather, this is your digital footprint being examined for marketable trends. We all have a digital footprint.
Every time we make a transaction, visit a webpage, log in at a certain IP address, or google a search term, we reveal a little bit about ourselves. Googling pet stores? You probably have a pet, and might be in the market for pet food or products.
Buying a plane ticket? Well, if that’s a holiday, travel companies might be able to convince you to buy a hotel room, travel insurance, sunglasses, swimming suits. Businesses are keen to gain this information, and often we give it to them freely.
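At its crudest, that kind of inference is just keyword matching. Here’s a hypothetical sketch – real ad-tech models are vastly more sophisticated, but the principle is the same:

```python
# Hypothetical mapping from search signals to products an advertiser might push.
INTEREST_SIGNALS = {
    "pet store": ["pet food", "pet insurance"],
    "plane ticket": ["hotel rooms", "travel insurance", "swimwear"],
}

def infer_ad_targets(search_history):
    targets = []
    for query in search_history:
        for signal, products in INTEREST_SIGNALS.items():
            if signal in query.lower():
                targets.extend(products)
    return targets

print(infer_ad_targets(["Pet store near me", "cheap plane ticket to Spain"]))
# -> ['pet food', 'pet insurance', 'hotel rooms', 'travel insurance', 'swimwear']
```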
Whenever you see a pop-up on a webpage asking you to accept cookies, chances are those cookies relate to tracking what you do on that page. Apps on your phone ask your permission to use your microphone. There have been some unscrupulous cases where, if you grant that permission without reading it, your phone uses the mic to listen in on your day-to-day conversations, watching out for keywords.
That information is then sold on to advertisers, so they can better judge what to sell to you. This can already be unfortunate, as you might not want certain adverts popping up on your computer. A pregnant woman might not want to tell the world she’s pregnant yet, but if AI analysing her data figures it out, she and her partner might get adverts for baby gear, potentially tipping off anyone who happens to see their phones.
But what happens if we’re not just talking about businesses, but governments? Then the means of information-gathering become much more vast, and the stakes much higher. Governments like the US are interested in protecting the lives of their citizens from foreign threats, and being able to form a digital profile of persons of interest can be the difference between a bomb threat being prevented and hundreds of lives being lost.
So governments will use every trick in the book to gain as much information as possible. Google searches. Shipping patterns.
Spy satellite images and video. Financial transactions. Weather reports.
All these things help paint a picture, but the sheer quantity of information out there has always made it difficult to analyse everything and spot the patterns. Unless, of course, you have a really powerful AI. AI like Sentient is not just able to analyse data to spot patterns.
Sentient is an automated, learning, adapting AI with the ability to direct satellites to locations of interest. Again, a lot of the information about it is classified, but the NRO did release some information about it in 2016, revealing some clues as to its abilities. Sentient is described in this release as “data-ingesting and processing”, meaning specialists will feed it vast quantities of data.
It is “sense-making”, meaning it will evaluate the data itself to try to discern patterns of behaviour in what it’s seeing. If a foreign power’s jets are normally stationed at one airbase, and suddenly they’re congregating at another near the border of a neighbouring nation, Sentient can conclude that they might be about to launch an attack. And then Sentient can take that one step further.
Rather than just flag what it’s identified as interesting, Sentient is able to direct satellites to photograph certain locations to fill in gaps in its knowledge. That’s where the learning, adapting part comes in. Once the satellites are in place and Sentient can see what they see, it’ll evaluate the data and try to predict where it needs to go next.
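We can only guess at what Sentient’s internals look like, but the publicly described loop – ingest data, make sense of it, task satellites, re-evaluate – can be sketched as a toy. Every name and number below is hypothetical; the real architecture is classified:

```python
class ToySentient:
    """Toy stand-in for the described ingest -> sense-making -> tasking loop."""

    def __init__(self):
        self.expected = {}   # site -> learned baseline activity level

    def sense_making(self, site, activity):
        """Compare new data against expectations; return the site if anomalous."""
        baseline = self.expected.get(site, activity)
        self.expected[site] = 0.9 * baseline + 0.1 * activity   # learning, adapting
        return site if activity > 2 * baseline else None

def surveillance_cycle(ai, feeds, tasked_satellites):
    for site, activity in feeds.items():          # data-ingesting
        if ai.sense_making(site, activity):       # sense-making
            tasked_satellites.append(site)        # direct a satellite there

ai, tasked = ToySentient(), []
surveillance_cycle(ai, {"airbase A": 5, "airbase B": 5}, tasked)   # learn baselines
surveillance_cycle(ai, {"airbase A": 5, "airbase B": 40}, tasked)  # jets congregate
print(tasked)   # -> ['airbase B']: the next pass gets photographed automatically
```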
The whole process is completely automated. How do we know this is real? Firstly, because commercial companies are attempting to do the same thing.
BlackSky is using its large fleet of satellites, driven by automated AI, to give buyers “foresight”. By using space-based intelligence, BlackSky is attempting to build a picture of what’s happening right now on battlefields and in foreign countries - a repaired bridge, a new airport being constructed. Put raw data like that through the right kind of analysis – AI-driven analysis – and they can figure out what’s about to happen, too.
That new bridge might hint at the need to transport troops quickly to a new warzone. The airport might serve as a staging base for fighter jets in a planned offensive. In their own words, through AI, they hope to give companies the chance to act not just fast, but first.
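Spotting that repaired bridge in the first place is conceptually simple: compare today’s image of a site against yesterday’s and flag what’s different. A toy version might look like this (real pipelines also have to handle clouds, shadows, and aligning the two images):

```python
def changed_regions(before, after, threshold=30):
    """Flag pixels whose brightness shifted sharply between two passes.

    `before` and `after` are equal-sized 2D grids of 0-255 brightness values.
    """
    return [(row, col)
            for row in range(len(before))
            for col in range(len(before[0]))
            if abs(before[row][col] - after[row][col]) > threshold]

yesterday = [[10, 10, 10],
             [10, 10, 10]]
today     = [[10, 10, 10],
             [10, 200, 10]]   # a new, bright structure appears
print(changed_regions(yesterday, today))   # -> [(1, 1)]: something was built here
```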
BlackSky is scanning ports around the Black Sea with frequent overhead flybys of small satellites. With 15 passes a day, it has made 70,000 ship detections, helping it build a picture of exactly where Russian ships are, where they’re likely headed, and where they might be vulnerable. This information is part of the reason Russia has lost so many of its ships to Ukrainian attacks.
And secondly, when an executive from the National Geospatial-Intelligence Agency (NGA) was asked directly at a 2016 Space Symposium how good military and intelligence community algorithms had become at interpreting data and taking action based on those interpretations, they simply responded, “That’s a great question. And there’s a lot of really good classified answers,” before swiftly moving on. If BlackSky can do all this, it is safe to assume that Sentient can do the same and more.
BlackSky does not just use satellite images, though. Besides its 25 satellites, it draws on data from 40,000 news sources, 100 million mobile devices, 70,000 ships and planes, 8 social networks, 5,000 environmental sensors, and thousands of Internet-of-Things devices, according to a Verge article on the subject. What does Sentient use?
Presumably, a lot more. A retired CIA analyst suggested the answer is “everything”. Images, financial information, weather records, pharmaceutical purchases.
All of it paints a picture, and all of it can help identify patterns – or abnormal behaviour. There are even reports of it keeping an eye out for UFOs - I’d love to know what it found. Obviously the NRO isn’t keen to reveal exactly what Sentient is paying attention to, as foreign powers could then try to manipulate that information to their own advantage.
But the sheer scale of information it consumes must be vast, and unlike humans who would be swamped trying to make sense of it all, Sentient can swiftly skim through the noise to identify the key points of interest. Is this a problem? Yes and no.
On the one hand, it certainly suggests that privacy is going to be harder and harder to come by. I’ve spoken before about how satellite images are becoming higher and higher resolution, and this could mean governments can keep tabs on you at all times. Now, they’re not just looking at you physically – they’re examining your social media feeds, your spending habits, and more.
Which is… fine if you have nothing to hide. But it is unsettling, nonetheless. Yet there will always be benefits.
Sentient will be able to identify threats, which will help the government prepare for them and keep its citizens safe. AI programs never tire, and never stop. If science ever gets AI of the same level, we might one day see space missions that are completely automated - AI looking through telescopes to identify places it wants to learn more about, and then launching its own probes to take a closer look, with an army of data-analysing AI to evaluate the final result.
A lot of fascinating discoveries could be made that way. If it’s any consolation for American viewers, Sentient is bound by the usual NRO restrictions on when the US can spy on its own citizens, which is not all the time. But don’t rest too easy – if the US is developing AI like this, it’s very likely that other governments and private companies are doing the same thing too.
Even the Pentagon is concerned about online profiles being built up on their staff – filled with information that foreign spies might be able to utilise. Beyond that, problems can arise when our tools become too powerful. I’m not talking about some terrifying Skynet scenario, but instead what naturally happens when AI gets powerful enough that humans will struggle to “check its workings”.
In the UK, there was recently a major scandal when Horizon, the accounting system used by the Post Office, incorrectly implicated about 900 sub-postmasters in theft and fraud. Many of them were prosecuted, simply because the Post Office assumed the information Horizon was giving it was correct. In New Zealand, a woman was incorrectly identified as a shoplifter because facial recognition software struggled to precisely identify men and women of colour, leading to a false positive.
AI tasked with making scientific discoveries could make similar mistakes, warping our understanding of the universe instead of enhancing it. And AI can just as easily overlook important information if it’s been taught to ignore it. When an alleged Chinese spy balloon was caught flying over the US in 2023, officials realised that their radar filters were set to detect things moving at the speed of missiles, but not small, slow things like balloons.
When they adjusted their settings, several other balloon-like objects suddenly showed up on their radars and were subsequently shot down. Although not all of these later objects were necessarily spy balloons, it does prove that an object behaving in an unusual way can sail right past the artificial systems designed specifically to watch for it. AI might be very good at answering questions, but it is not yet so good at knowing which questions to ask.
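That balloon episode boils down to a single filter threshold. In sketch form – a deliberately simplified toy, not any real radar system – the trap looks like this:

```python
# A deliberately simplified toy, not any real air-defence system.
MIN_SPEED_KMH = 300   # threshold tuned for aircraft and missiles

def worth_reporting(track):
    return track["speed_kmh"] >= MIN_SPEED_KMH

tracks = [
    {"name": "airliner",    "speed_kmh": 850},
    {"name": "spy balloon", "speed_kmh": 30},   # just drifts with the wind
]
print([t["name"] for t in tracks if worth_reporting(t)])
# -> ['airliner']: the balloon is filtered out before any analyst ever sees it
```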
What happens if powerful AI programs do not have sufficient oversight? What biases and assumptions could creep in, particularly for classified AI that not many people are scrutinising? It’s a concerning thought.
AI is here to stay. A digital profile is being created about you, and without proper laws and regulations in place, there’s not a lot you can do to stop data-broker companies from selling your personal data beyond clicking “reject all” on those cookie pop-ups on websites - not unless you’re willing to go through the arduous process of getting those companies to delete your information, or to campaign for governments to put tight restrictions in place. That’s not such a bad idea.
The ability of AI like Sentient to trawl through all of that raw information - a host of satellite footage and other sources - to attempt to predict what will happen in the future, send satellites to key locations to see if those predictions were correct, and then make even more predictions… well, it’s certainly a powerful tool. A slightly chilling, powerful tool. It makes perfect sense why we might want it.
But it will have an impact on society, and perhaps one day a similar impact on our relationship with space. Perhaps, with its predictive capabilities, the only one who will see what that impact will be is Sentient itself.