
Introduction:

For decades, engineers have experimented with self-driving automobile prototypes. The concept is simple: equip a car with cameras that can track everything in its immediate vicinity and have the car respond if it is about to drive into something. Teach in-car computers the rules of the road, then release them to find their own way to their destination.

This straightforward summary conceals a great deal of intricacy. Driving is one of the more difficult tasks that humans engage in on a regular basis. Following a set of rules isn’t enough to drive like a person, because humans do things like establish eye contact with other drivers to affirm who has the right of way, respond to weather conditions, and make other decisions that are difficult to codify in rigid rules.
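To make the rule-based picture concrete, here is a deliberately naive, hypothetical sketch of the kind of sense-and-respond logic the summary describes: track objects ahead and brake if a collision is imminent. All names and thresholds are illustrative and not drawn from any production system; real vehicles fuse many sensors with prediction and planning layers, which is exactly the intricacy that rigid rules alone cannot capture.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float          # gap between the car and the object, in metres
    closing_speed_mps: float   # how quickly that gap is shrinking, in m/s

def time_to_collision(obj: TrackedObject) -> float:
    """Seconds until impact if neither the car nor the object changes speed."""
    if obj.closing_speed_mps <= 0:
        return float("inf")    # the gap is steady or growing, so no collision
    return obj.distance_m / obj.closing_speed_mps

def should_brake(scene: list[TrackedObject], threshold_s: float = 2.0) -> bool:
    """A rigid rule: brake if any tracked object would be reached within 2 seconds."""
    return any(time_to_collision(obj) < threshold_s for obj in scene)

# Example: a vehicle 18 m ahead closing at 10 m/s gives 1.8 s to impact, so brake.
scene = [TrackedObject(distance_m=18.0, closing_speed_mps=10.0)]
print("brake" if should_brake(scene) else "continue")
```

A rule like this handles the easy case; it says nothing about eye contact, weather, or the judgement calls described above, which is why rule-following alone does not add up to human driving.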

With automated driving systems on pace to improve considerably, it is reasonable to believe that the technology will be a tremendous boon for organisations that are well-positioned to benefit. Business Insider Intelligence projected that by 2020 there would be around 10 million automobiles on the road with automated driving technology. According to McKinsey, around 15% of cars sold in 2030 will be completely driverless. Intel predicts that by 2050 the worldwide market for driverless vehicles, or “the passenger economy,” will be worth more than $7 trillion per year. With an expected 300 million autonomous vehicles on the road in 2050, Morgan Stanley estimates that providing service to driverless vehicles will be a $200 billion annual market for telecom companies.

Before the year 2000, the road to self-driving automobiles was paved with incremental automation features for safety and convenience, such as cruise control and antilock brakes. Advanced safety systems, such as electronic stability control, blind-spot detection, and collision and lane-change alerts, became available in automobiles after the turn of the century. Sophisticated driver-assistance features such as rearview video cameras, automated emergency braking, and lane-centring assistance debuted between 2010 and 2016. Automobile manufacturers had reached Level 4 of autonomous driving technology as of 2019. Before completely autonomous cars can be purchased and operated on public roads in the United States, manufacturers must clear a number of technological hurdles and solve a number of critical challenges. Even though Level 4 autonomous vehicles are not yet ready for public use, they are already employed in limited ways.

Levels and Defining Characteristics:

Level 0 – The driver is responsible for all core driving tasks. Level 0 vehicles may still include features like automatic emergency braking, blind-spot warnings, and lane departure warnings.
Level 1 – Vehicle navigation is controlled by the driver, but driving-assist features like lane centring or adaptive cruise control are included.
Level 2 – Core driving tasks are still controlled by the driver, but the vehicle is capable of using assisted driving features like lane centring and adaptive cruise control simultaneously.
Level 3 – A driver is still required but is not needed to navigate or monitor the environment if certain criteria are met. However, the driver must remain ready to resume control of the vehicle once the conditions permitting ADS are no longer met.
Level 4 – The vehicle can carry out all driving functions and does not require the driver to remain ready to take control of navigation. However, the quality of the ADS navigation may decline under certain conditions, such as off-road driving or other hazardous situations. The driver may have the option to control the vehicle.
Level 5 – The ADS is advanced enough that the vehicle can carry out all driving functions, no matter the conditions. The driver may have the option to control the vehicle.
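As an illustrative companion to the table above, the following toy sketch (my own encoding, not any manufacturer's or standards body's reference code) captures the one distinction the levels turn on for the driver: up to Level 3 a human must remain ready to take over, while Levels 4 and 5 do not require it.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Toy encoding of the driving-automation levels described above."""
    LEVEL_0 = 0   # driver performs all core driving tasks
    LEVEL_1 = 1   # single assist feature, e.g. adaptive cruise control
    LEVEL_2 = 2   # combined assist features, driver still drives
    LEVEL_3 = 3   # system drives under set conditions, driver must stay ready
    LEVEL_4 = 4   # system drives within its operating domain, no standby driver
    LEVEL_5 = 5   # system drives everywhere, under all conditions

def driver_must_stay_ready(level: AutomationLevel) -> bool:
    """Levels 0-3 require a human ready to take control; Levels 4-5 do not."""
    return level <= AutomationLevel.LEVEL_3

print(driver_must_stay_ready(AutomationLevel.LEVEL_3))  # True
print(driver_must_stay_ready(AutomationLevel.LEVEL_4))  # False
```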

The need for self-driving vehicles:

  • Improved safety is the most often acknowledged advantage. In the United Kingdom alone, there were 1,770 traffic deaths reported last year, with over 26,000 people seriously injured. The statistics in the United States are considerably worse, with around 36,750 road deaths, including pedestrians and cyclists. With human error accounting for the bulk of accidents, even a 90% adoption rate of self-driving cars might save roughly 22,000 lives each year.
  • Another benefit is that self-driving vehicles could provide transportation to people who have previously been denied it: children, the disabled, and the elderly could theoretically travel without a driver, expanding accessibility.
  • Self-driving automobiles could cut hazardous pollutant emissions by up to 60%. Furthermore, these cars can be designed to optimise the potential reductions, which is welcome news for environmentalists and anyone who wants to have the least possible impact on Mother Nature.

Indian Outlook:

When it comes to the deployment of self-driving cars, India cannot afford to turn a blind eye. On the one hand, given that urban Indians spend 1.5 hours more each day in traffic than their Asian neighbours, the technology has the potential to revolutionise the way we live by making roads safer, reducing traffic congestion, and improving efficiency. On the other hand, autonomous vehicles could be overwhelmed by the country’s most congested traffic conditions and may fail to function correctly in the absence of the infrastructure changes required to normalise their presence. Autonomous vehicles would also allow users to make better use of their travel time instead of spending it on driving, and they have the potential to save lives by minimising human error, the major cause of traffic collisions. Driverless cars are also a practical way for people who are physically unable to drive themselves to get around. While autonomous passenger cars may take longer to reach Indian roads, self-driving tractors and trucks have already taken the first step, and companies like Escorts, Mahindra & Mahindra, and Flux Auto intend to debut them soon.

Several Indian start-ups are developing AV solutions for trucks, minibuses, and cars, in some cases with the goal of exporting to other nations. Infosys claimed a few years ago that it had created a “driverless” cart at its Mysuru centre in southern India. India’s ingenuity and technology are its greatest assets. It has the potential to become the world’s top supplier of autonomous vehicle technology and to nurture a whole “SV for AV” – a Silicon Valley for Autonomous Vehicles. The objective is to develop unique and niche uses for AVs, such as in-plant logistics, on-campus movement, and public transportation, where they may bring value, safety, and efficiency.

It is somewhat risky to deploy autonomous automobiles in a labour-intensive country like India, where the share of the working population, especially unskilled labour, is large. But that should not stop us from taking advantage of cutting-edge technology. Furthermore, autonomous cars are currently effective only on highways and face difficulties on ordinary roads. As a result, it is preferable to allow autonomous cars in India once the technology has matured.

Conclusion:

It is practically impossible to predict whether fully autonomous vehicles will eventually replace human-driven automobiles. However, the technology has advanced tremendously in recent years, as shown by the Budweiser driverless truck that delivered beer in Colorado in 2016, and the rate of technical growth has not slowed since the Renaissance, so it is reasonable to predict that we will see more autonomous cars in the future.

Humans have been developing tools and technologies to help us achieve our goals since the beginning of time. Huge technological breakthroughs have brought large adjustments in social structures, as well as in how people contribute to society and earn a living. Today’s technological advancements are rapidly allowing much of the labour now done by people to be automated. Thanks to robotics, the Internet of Things, and artificial intelligence, this applies to both blue-collar and white-collar employment. The widespread application of these technologies has sparked widespread worry about job loss. Although technology has almost always made certain occupations obsolete, it has also created new ones. Technology is a collection of tools that are employed in a variety of ways to boost productivity. Some occupations were lost as a result of the Industrial Revolution, but many more were created. It also enhanced society’s overall wealth and began to develop a middle class that could benefit from health, education, and other services previously available only to the wealthy. It can be difficult to forecast what kinds of employment this new revolution will create, and in what quantities, which makes the situation appear worse than it is.

This illustration may appear unduly hopeful to some. The new positions need an entirely different skill set – an assembly-line worker cannot be transformed into a data scientist overnight, if at all. Even though the Industrial Revolution unfolded over several decades, it resulted in immense social upheaval, discontent, and severe hardship for many people. The digital revolution might unfold far more quickly, affecting enormous swaths of a complex, interconnected economy with strong feedback loops. Artificial intelligence (AI) will play a bigger role in how we live, work, and play in the future. However, we are still a long way from a day when computers will completely replace the global workforce, particularly in vocations that need facets of the human mind such as perception, social intelligence, and creativity. As technology’s breadth and capabilities develop, those who embrace and leverage AI to bring efficiency to routine operations while infusing human talents and knowledge into the system are likely to gain the most. Rather than taking an all-or-nothing approach to boosting abilities, embrace the human-in-the-loop method and benefit from the one-two punch of artificial and human intellect while planning for the future.

There are areas where AI still falls short. Surgery is best left in the hands of skilled surgeons, whose fine motor skills and capacity to recognise and appraise each scenario are considerably superior to those of any machine. Similarly, AI cannot replace the level of social intelligence required for HR professionals to connect with candidates and workers and develop healthy relationships. Finally, machines will never be able to match an attorney’s inventiveness or intelligence when it comes to drafting, negotiating, or enforcing complicated contracts. A human-in-the-loop workflow combines artificial and human intelligence to complement people and generate a better result than either could achieve separately. To obtain the desired outcome, a person working alongside a machine or computer enters data into the system. Humans train, tune, and test algorithms that get smarter and more accurate over time, and this process forms a continuous cycle. By adding human judgement and preference into the loop, complex AI systems have evolved into tools that are more powerful and efficient than purely automated or totally manual systems alone.
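One common way this continuous cycle is realised is an escalation loop: the model handles confident cases, uncertain cases are routed to a person, and the corrected examples are fed back for retraining. The sketch below is only a shape of that cycle; model_predict, human_review, and retrain are placeholder functions of my own, not a real library’s API.

```python
import random

def model_predict(item):
    """Stand-in for a trained model: returns (label, confidence)."""
    return "approve", random.uniform(0.4, 1.0)

def human_review(item):
    """Stand-in for a human expert supplying the correct label."""
    return "approve"

def retrain(examples):
    """Stand-in for refitting the model on human-verified examples."""
    print(f"retraining on {len(examples)} human-verified examples")

def human_in_the_loop(items, confidence_threshold=0.8):
    corrections, decisions = [], []
    for item in items:
        label, confidence = model_predict(item)
        if confidence < confidence_threshold:
            label = human_review(item)          # uncertain cases go to a person
            corrections.append((item, label))   # corrected examples are kept
        decisions.append((item, label))
    if corrections:
        retrain(corrections)                    # the model improves over time
    return decisions

human_in_the_loop(["contract-1", "contract-2", "contract-3"])
```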

In order to address the issue of whether AI systems will be able to replace humans, one must first recognise that human psychology and an AI system are fundamentally different.

While both work with cognitive processes such as problem-solving, memorisation, planning, reasoning, and perception, the human mind is significantly more capable in these activities. The human brain can incorporate emotional intelligence, self-awareness, and lived experience into various activities, giving each one a distinct flavour. The AI system, on the other hand, is currently too immature to learn on its own; humans must still teach it with data sets in order for it to accomplish various jobs. At the end of the day, artificial intelligence is a creation of the human intellect. Because of human inventiveness, total automation of certain activities is now conceivable. Even while the question of whether or not humanity will be replaced by AI remains unanswered, we may be confident that, for the time being, AI is nowhere near the level of technological maturity required to take over the human race.

Conclusion:

The question of whether AI will replace humans in many areas may never have a definitive answer. Predictions and observations are all we have. AI technology needs ongoing supervision and inspection before its negative impacts grow too great. An AI takeover will remain confined to the exciting tales of dystopian movies and fictitious worlds if adequate rules are in place to protect users. It is never too early to start thinking about the future. People must challenge themselves to comprehend the data and automation technologies on the horizon now in order to be prepared for tomorrow’s advancements in automation. However, capturing value from automation needs more than just data and technological know-how. The biggest obstacles will be the personnel and organisational adjustments that leaders must implement when automation upends whole business processes, as well as the culture of businesses, which must come to see automation as a dependable productivity lever. Senior executives, for their part, will have to “let go” in ways that run against a century of organisational development.

The manner in which war is fought is evolving. While technological advancements in the twentieth century helped to level the playing field between states, rapid progress over the past two decades has made it clear that possessing a state’s wealth, or even being a state, is no longer a prerequisite for influencing global politics. Technology has been the great equaliser and the driving force in bringing new participants onto the field of battle. It is unsurprising that we are seeing a shift in how western countries organise and develop their defence forces in response to these new threats. The capacity of a country to use its scientific and technological foundation to investigate, experiment, analyse, and exploit new technologies, methods, and tactics will be critical to its operational advantage, security, and prosperity in the future. A national defence strategy must also ensure independence in defence development and procurement. As technology becomes more democratised, it is no longer only countries with vast financial resources that can arm themselves with the weapons of war; it can now be done at an individual level as well. These are not the typical gun or tank; instead, they are ordinary computers that, used skilfully, give bad actors the power to inflict harm on their targets, whether governments or proxies for governments. This type of activity is now considered to occur in the grey zone, a space where bad actors can target political, economic, and military tools without triggering a traditional response or even being recognised as committing formal acts of aggression.

Britain’s military has been focused on transitioning towards a more lethal, hi-tech, drone-enabled form of combat. A year after formally exiting the EU, the United Kingdom is attempting to define and establish its new role in the world. It has signalled an intention to use this unique opportunity to apply its knowledge of science and technology to the improvement of defence and security capabilities, demonstrating its capacity to remain a reliable partner to its European and global allies. It has also promised to invest in cyberspace, chemical, biological, and radiological technologies, innovative weapons, and system integration.

The UK’s capacity to combine enhanced training with faster adoption of science and technology, particularly technologies already in use by its adversaries, will be a vital boost to its preparedness. The UK, which has a history of being hesitant to adopt new technologies, needs to do more to speed up adoption and guarantee that troops have the tools they need to train and build skills against more complex technology-based assaults. To accomplish this, a path must be established for the rapid transfer of commercial-sector technology to military and security applications in order to strengthen the military’s ability to respond effectively to attacks that originate in the civil sector. Being an early adopter of disruptive technologies will put the UK ahead of its enemies and give it a leading position among its allies in this field.

Experimentation has grown increasingly common as a result of the success of Silicon Valley’s rapid prototyping and innovation culture – fail fast, learn quickly, and improve. Several multidisciplinary international exercises, such as Unmanned Warrior, which offered a testing ground for unmanned systems, and Formidable Shield, which tested eight NATO nations’ defence capabilities against ballistic missiles, have already demonstrated the utility of this approach in defence. Such exercises speed up the development and integration of new technologies and operational concepts by allowing them to be tested in a safe and controlled environment. Personnel can train with scarce or high-value assets using virtual and constructive simulations, and live training capabilities can be customised to match changing operational demands. Throughout, a technology-agnostic approach should be followed, with different manufacturers’ training systems, simulators, and equipment integrated to provide the most effective synthetic representation feasible.

By the end of the first year, the country hopes to have made significant progress on this plan. Creating a strategy implementation plan, establishing clear policy positions on the important capabilities that the government must maintain, offering direction to academia and industry on priority areas, and renewing the government’s technology incubation programme are all examples of such progress. However, more engagement with industry is required to properly incorporate innovations from the civil sector. Close communication and collaboration between all stakeholders are crucial to ensure that development and innovation stay mission-focused: defence and security services have first-hand knowledge of their operational issues, while academia and industry are continually investigating new solutions. All of this points to the need for a more modernised training programme, in terms of both methodologies and technologies. Training collaborations with businesses and allies will provide the tactical training required to face genuine threats while also strengthening cross-government, inter-Service, and international cooperation. Defence enterprises must work together to agree on common standards and principles for the use of collaborative environments, digital threads, and digital twins. Only once this is understood, and a collaborative culture is embraced, can the time-saving, cost-saving, and performance-enhancing benefits of collaborative training be realised.

Conclusion:

The United Kingdom’s new warfare strategy is in keeping with the direction in which warfare is moving, as well as the posture necessary to confront a growing number of threats. The introduction of science and technology as a pillar will provide the necessary foundation for building a successful defence in this new environment. To confront adversaries that increasingly employ modern media to launch attacks, a well-developed technological capability will be necessary. The first step toward a strong future defence is to recognise the possibilities of science and technology. Strengthening the United Kingdom’s leadership in science and technology offers a foundation on which it can stand shoulder to shoulder with other nations as it redefines its position in the international arena.