Not a day goes by without a mention of 5G enabling something or other, yet most of the devices we are installing in cars are 4G devices, replacing 2G and 3G devices. Many vehicles I have driven from Volvo, BMW, Mercedes and more have radar/lidar in the front and numerous cameras, and promise some of the lower levels of self-driving (there were five at the last count, if you care to google them and disappear down that rabbit hole)… however they didn't even manage to stay in lane on a motorway with perfectly maintained lanes and lines; not even enough to make, say, eating a sandwich more comfortable, let alone to drive automated while I keep watch, as the marketing hype suggested they could.
So if "lane maintenance" cannot even reliably keep a car in a motorway lane for the few seconds it takes to bite a sandwich in peace, let alone allow a hand away from the wheel, or even reliably alert to lane markings on a motorway, what is the point of the massive computer under the seat or in the boot consuming all that power? And how is 5G going to solve this? Well, the biggest problem is data and reference points: we learn by watching, storing and sharing information. Most of today's vehicle systems watch, analyse and then throw the data away, or store it only to be overwritten. This is like trying to learn and teach while being a hermit with amnesia. So in theory 5G can come in here by enabling that data to be sent somewhere it can be stored and analysed, with exceptions handed off to a human with real intelligence rather than AI (think of how voice recognition was useless until the exceptions were centralised, humans analysed and categorised them, and machines learned, and still learn, from this).
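To make that concrete, here is a minimal sketch of such an exception pipeline in Python. Everything in it is illustrative: the threshold value, the Detection fields and the Uplink stub are assumptions of mine, not any carmaker's real API.

```python
from dataclasses import dataclass
from queue import Queue

ESCALATION_THRESHOLD = 0.85  # illustrative cut-off, not a real product value

@dataclass
class Detection:
    label: str          # e.g. "lane_marking", "tram", "moose"
    confidence: float   # model confidence, 0.0 .. 1.0
    frame_id: str

class Uplink:
    """Stand-in for a 5G uplink to central storage."""
    def send(self, store: str, det: Detection) -> None:
        print(f"uploading {det.frame_id} ({det.label}) to {store}")

def route_detection(det: Detection, uplink: Uplink, review_queue: Queue) -> None:
    # Confident detections feed fleet-wide learning; uncertain ones are
    # escalated to a human, instead of being analysed and thrown away.
    if det.confidence >= ESCALATION_THRESHOLD:
        uplink.send("training_store", det)
    else:
        review_queue.put(det)                 # a human labels the exception
        uplink.send("exception_store", det)   # keep the raw data for re-training

route_detection(Detection("moose", 0.42, "frame_0042"), Uplink(), Queue())
```

The design point is the same one voice recognition proved: the machine does the easy 85%, the humans do the hard 15%, and the hard 15% is exactly what the network needs to carry.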
However, is this a good use of (expensive) resources when the lines are going to get dirty, be repainted, or be covered with skid marks, and when arguably we should just take the time to eat the sandwich out of the car and keep our hands on the wheel anyway? Which brings me to the reason we are taught, certainly in advanced driving, to keep both hands on the wheel in a position in which we can apply leverage to it, as we never know what may happen: a blow-out, the car stepping out, or indeed an animal stepping out; the infamous moose test! This coincided with a news article about a self-driving car not even seeing a tram in the very city it was developed to drive in, and so should have had, you know, the ability to identify a tram.
And this is where not just network speed, sending data and storing data come in, but also the analysis of that data. See, people think a camera is a camera, but they are not (see my other article coming out on the use of smart glasses for AI and AR, from work we are doing in logistics and factories with 5G). A static camera makes AI a LOT easier, as there are many constants, like the size of an object in relation to its distance. With a vehicle this is more difficult as, well, the vehicle, and therefore the camera, is moving along at least one or two axes. Take this to a headcam and the variables become literally mind-boggling, and as a result AI-boggling too.
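To see why the fixed camera has it easier, here is a back-of-envelope sketch using the standard pinhole-camera model. The focal length and plate width below are assumed round numbers for illustration, not from any specific camera.

```python
# Pinhole-camera back-of-envelope: for a static camera, an object's
# known real size plus its apparent size in pixels gives its distance.

FOCAL_LENGTH_PX = 1000.0   # focal length expressed in pixels (assumed)

def distance_m(real_width_m: float, apparent_width_px: float) -> float:
    """distance = f * W / w  (standard pinhole projection)."""
    return FOCAL_LENGTH_PX * real_width_m / apparent_width_px

# A UK/EU-style number plate is roughly 0.52 m wide.
for px in (130, 65, 26):
    print(f"{px:>4} px wide  ->  ~{distance_m(0.52, px):.1f} m away")
# Double the distance, half the pixels: a fixed camera can lean on this
# constant; a camera moving on several axes sees it change every frame.
```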
Which brings us back to the infamous moose test, or let's call it a tram test. Take a number plate: something that for a given country is the same size, shape and colour, with standard letters and a standard font. While a static camera on a building may resolve plates in the high ninety percents, a camera on a moving car will be in the low nineties in good daylight, with few dead flies on the lens, mainly because of the varying apparent size of the plate in the first place. As a crash course in AI: ANPR usually identifies a vehicle, then a plate, and only then applies OCR to it. So if something as uniform as a number plate confuses the system, imagine a moose. Then there is the unpredictability of some herbivores, moose included.
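For the curious, here is a minimal Python sketch of that detect-then-read cascade. The detector and OCR functions are stand-in stubs, and the 97% per-stage figure is an assumption I chose to show how stage confidences multiply.

```python
def anpr_pipeline(frame, detect_vehicles, detect_plate, read_plate):
    """vehicle -> plate -> OCR cascade; confidences multiply per stage."""
    results = []
    for vehicle_roi, p_vehicle in detect_vehicles(frame):
        plate = detect_plate(vehicle_roi)
        if plate is None:
            continue
        plate_roi, p_plate = plate
        text, p_ocr = read_plate(plate_roi)
        # Three 97%-accurate stages compound to 0.97**3, about 91%
        # end to end: roughly the low-nineties figure quoted above.
        results.append((text, p_vehicle * p_plate * p_ocr))
    return results

# Stub models so the sketch runs; real ANPR uses trained detectors.
frame = "raw_camera_frame"
vehicles = lambda f: [("vehicle_roi", 0.97)]
plates   = lambda roi: ("plate_roi", 0.97)
ocr      = lambda roi: ("AB12 CDE", 0.97)
print(anpr_pipeline(frame, vehicles, plates, ocr))  # ~0.913 confidence
```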
However, let's imagine that the camera on a moving car evolves to 100% number plate detection and then 100% moose identification, and that said moose moves predictably into our path in a moose test… Any of you who can hold a manual or a wheelie on a bike for any period of time, or balance a car in a drift, let alone on two wheels: would you care to outsource that to a computer? The amount of processing required could very quickly overwhelm even the fastest processors and networks, so what difference does 5G make here, or even a future 10G?
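A rough latency budget makes the point with numbers. The round-trip times below are assumed ballparks for illustration, not measured figures:

```python
# How far the car travels while waiting for a network round trip.
# Assumed ballparks: 4G ~100 ms, 5G ~20 ms, onboard compute ~5 ms.

SPEED_KMH = 100.0
speed_ms = SPEED_KMH / 3.6          # ~27.8 m/s

for label, rtt_ms in [("4G round trip", 100), ("5G round trip", 20),
                      ("onboard compute", 5)]:
    travelled = speed_ms * rtt_ms / 1000.0
    print(f"{label:>15}: {rtt_ms:>3} ms  ->  {travelled:.2f} m travelled")
# Even at 5G latencies the car moves over half a metre before a reply
# arrives; the swerve itself has to stay onboard, while the network
# handles the learning and sharing.
```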
Well, I suppose we have to start somewhere, and to learn we first need to capture, share and analyse, and that is where not just the faster speed but the lower latency of 5G comes in; I dare say a lot of the hard learning, sharing and analysis will be done on Private 5G.
What #machinelearning and #AI need most in this space, and in many related spaces, is #data: #real, varied, #historic and #realtime data. No matter how intelligent a person is, without an education, sharing and analysis they get nowhere… and even then, a #newmoosetest that works in AI may not work in all the scenarios in which a non-artificial moose test actually happens, no matter how fast the network or how low its latency.