
AI is changing how subjects are taught at universities.

Artificial intelligence is changing how education is delivered at universities.

Artificial Intelligence (AI) is transforming many human activities, ranging from daily chores to highly sophisticated tasks. But unlike many other industries, the higher education sector has yet to be significantly influenced by AI.

Uber has disrupted the taxi sector, Airbnb has disrupted the hotel industry, and Amazon disrupted first the bookselling sector and then the whole retail industry. It is only a matter of time, then, until the higher education sector undergoes a significant transformation.

Within a few short years, universities may well have changed beyond all recognition. Here are five ways that AI will help to change and shape the future of universities and higher education for the better.

1. Personalised learning

Universities are already using AI algorithms to personalise learning and deliver content suited to each student’s needs and pace of learning – and this is only likely to continue. This idea is built on research showing that different people have different aptitudes, skills and orientations to learning when exposed to the same content and learning environments.

Offering personalised, adaptive learning platforms recognises the diversity that is part of any learning ecosystem. This will be a significant change for universities, as it moves away from the traditional model of “one module guide for all”.

It will see educators equipped with data sets to analyse and understand the needs of individuals. And work can be automatically adapted to the style and pace of learning for each particular student.

Because everybody learns differently.
Pexels

2. Moving beyond the classroom

As educational AI develops, students will be able to study where they want, when they want and using whatever platform they want. This is likely to mean that tablets and mobile phones will become the main delivery methods.

Universities are already using AI-enabled smart building concepts to redesign learning spaces. Modern “smart” classroom spaces are now generally equipped with circular tables, laptops, flat-screen monitors, multiple projectors and whiteboards to encourage and support collaborative, engaged, active learning.

This helps educators move away from a traditional classroom set-up towards a more interactive style of working that encourages deeper learning approaches. And this will start to include more hybrid methods of learning, such as a mix of face-to-face and online interactions.

3. Welcome to the smart campus

The Internet of Things also has the potential to transform universities into smarter places to work and learn. At its core, the technology is simple: it’s all about connecting devices over the internet and letting them talk to us, as well as to each other.

Smart classrooms will also enhance students’ learning experience. A classroom equipped with Internet of Things connectivity can adapt to personalised settings, preparing the room for different faculty members. Monitoring attendance and invigilating exams will also be automated and made much more robust.
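To make the idea concrete, here is a minimal, purely illustrative sketch of the publish/subscribe pattern that connected devices typically use to “talk” to each other. The topic names and device behaviours are invented for this example; a real deployment would use a broker protocol such as MQTT.

```python
from collections import defaultdict

# In-memory stand-in for an IoT message broker: devices subscribe to
# topics and react whenever something is published on them.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    for handler in subscribers[topic]:
        handler(message)

# Hypothetical classroom devices react when a lecturer's profile arrives.
subscribe("room101/lecturer", lambda p: print(f"Lights set to {p['lights']}"))
subscribe("room101/lecturer", lambda p: print(f"Projector loads {p['slides']}"))

# A badge swipe at the door publishes the lecturer's personalised settings.
publish("room101/lecturer", {"lights": "70%", "slides": "lecture3.pdf"})
```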

This development in technology will also enable smart campuses to adopt advanced systems to automatically monitor and control every facility. Universities will be able to monitor parking spaces, building alarms, room usage, heating and lighting all very easily.

Say goodbye to traditional learning environments.
Pexels.

4. Great customer service

Universities are also using AI to streamline their processes, resulting in cost savings and better service levels – and this is something that is set to continue. A good example of this is Deakin University in Australia, which has partnered with IBM to be the first university worldwide to implement Watson. Watson is a supercomputer developed by IBM that combines AI and sophisticated analytical software to answer users’ questions.

Watson’s main functionality is to replicate a human’s ability to answer questions. It runs on 90 servers with a combined data store of more than 200m pages of information, processed against six million logic rules.

Deakin’s aim is to create a 24/7 online student advisory service that will improve the student experience. Integrated with DeakinSync, the university’s single-interface platform and online personal hub, the service enables students to ask questions and receive instant online answers.

5. Monitoring performance

Another dimension of using AI innovations in universities will be the use of blockchains. This will revolutionise how universities operate, as higher education institutions use this technology to automate the recognition and transfer of credits, potentially opening up learning opportunities across universities.

Universities can also use blockchains to register and record the intellectual property rights arising from scholarly research. Copyright could be notarised at the date of publication and later reuse tracked for impact assessments. This will transform the way universities operate and help to demonstrate the true impact that academic research can have.

Nafis Alam, Associate Professor, University of Reading and Graham Kendall, Professor of Computer Science and Provost/CEO/PVC, University of Nottingham
The Conversation

The AI revolution has begun!


The computational analysis and learning techniques described in earlier research manifest themselves in the form of artificial intelligence (AI). AI represents the ambition to create machines that can think, learn and create solutions to problems across the same range of tasks to which the human mind can be applied.

AI is absolutely nothing new – all of us are using it every day. Every time we send an email, use a credit card or travel, or search the Internet, AI systems are the bedrock on which we perform these activities. Intelligent algorithms are constantly checking and detecting credit-card fraud, flying and landing airplanes, keeping track of inventories and even manufacturing products in robotic factories.

Genetic algorithm

These AI systems are built on machine learning, deep learning, artificial neural networks and natural language processing, together with the algorithms that drive them. A good example is the genetic algorithm (GA), a heuristic search method used in artificial intelligence and computing that is based on Darwin’s theory of evolution. GAs find optimised solutions to search problems using the mechanisms of natural selection and evolutionary biology: selection, mutation, inheritance and recombination.

Design of a genetic algorithm: a randomly generated base population of n strings of characters or bits, where one string corresponds to one chromosome.
NoJhan/Wikipedia/TCF

GA is based on the classic view of a chromosome as a string of genes. R.A. Fisher used this view to found mathematical genetics, providing mathematical formulae that specify the rate at which particular genes would spread through a population.

When solving constrained and unconstrained optimization problems, a classical algorithm generates a single data point at each iteration. The sequence of data points then approaches an optimal solution. By contrast, GA uses a process similar to natural selection – it repeatedly modifies a population of individual solutions and at each step, randomly selects data points from the current population of points and uses them as “parents” to produce the children for the next generation. Over successive generations, the population “evolves” toward an optimal solution.

Although randomised, GAs are by no means random. Instead, they exploit historical information to direct the search into regions of better performance within the search space. A GA effectively simulates the survival of the fittest among individuals over consecutive generations. Each individual represents a point in the search space and a possible solution, and the individuals in the population are then made to go through a process of evolution.
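As an illustration, here is a minimal GA sketch in Python. The fitness function (count the 1 bits in a chromosome), the encoding and all parameters are toy assumptions chosen for clarity, not any particular production system.

```python
import random

# Evolve 20-bit chromosomes toward all ones; fitness = number of 1 bits.
GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.01

def fitness(chromosome):
    return sum(chromosome)

def select(population):
    # Tournament selection: the fitter of two random individuals wins.
    a, b = random.sample(population, 2)
    return max(a, b, key=fitness)

def crossover(mum, dad):
    # Single-point recombination of two parent chromosomes.
    point = random.randrange(1, GENES)
    return mum[:point] + dad[point:]

def mutate(chromosome):
    # Flip each gene with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in chromosome]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(c) for c in population))  # approaches 20 as the population evolves
```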

It is through the combination of GAs, machine learning, deep learning, artificial neural networks and natural language processing that AI can learn from data and create solutions – a fundamentally human activity. Today, these methods are used to search through large and complex data sets for reasonable solutions to constrained and unconstrained optimisation problems. They can tackle problems that are not well suited to standard statistical models, including those whose objective function is discontinuous, non-differentiable or highly nonlinear. They can also be used on mixed (continuous and discrete) combinatorial problems, as they are less susceptible to getting ‘stuck’ at local optima than classical gradient search methods.

Types of AI.
Futurism.com

Narrow and general AI

These advances have allowed us to leverage the gargantuan amounts of data being produced, in any format, to perform a plethora of actions. However, most of the AI systems performing these tasks are “narrow AI” (ANI): they are devised and constructed to perform one very specific function. They are capable of performing isolated tasks that are computationally complex but well defined. Examples include IBM’s Watson, self-driving cars and chatbots.

These AIs, although high-performance, are hard to generalise to new problems. They are tools made for a very specific goal, such as winning a chess match or answering customer queries. An artificial general intelligence (AGI), however, is constructed to have a more general goal: learn new things, self-improve, expand its scope of functionality, or even create something new. AGI systems are explicitly designed to autonomously learn novel tasks and adapt to changing environments; they are developed to have open-ended goals. Even the programming languages for AGIs are constructed under this philosophy. Replicode is one example of an AGI language – unlike other AI languages, it takes the form of short parallel programs with no explicit conditional statements such as “if-then” or loops.

Business implications

AI and AGI are being used in a host of sectors. Last year, Uber acquired the AGI startup Geometric Intelligence and intends to create a new artificial intelligence research arm at its headquarters. What Uber intends to do with the purchase is anyone’s guess, and dissecting how business plans will change with these technologies is an ongoing process. But one thing is clear: the work landscape is in a state of turbulent flux, and adapting to these changes is the only way to ensure economic survival. As stated by Uber’s chief product officer Jeff Holden:

“If you look into the future, there are going to be step-function changes in artificial intelligence that will affect business models and business opportunities.”

But progress is being made: financial firms are already creating hedge funds that can predict price changes based on a host of data, including prices and volumes, news and social media data in various languages, and other economic and accounting data at national and company levels. Tenets of economic theory such as elasticity, market structure, price optimisation, or even aggregates in general, are all exposed to profound transformation in the face of AI.

We are about to witness much wider implications of AI for the economy, society and business. Humanity’s learning journey has only just begun as it confronts the steady pervasion of AI and the new horizons that the integration of this technology will shape in its own image.


The research for this article is sponsored by the KPMG/ESCP Europe Chair in Governance, Strategy, Risks, and Performance. Terence Tse and Mark Esposito are the authors of “Understanding How the Future Unfolds: Using DRIVE to Harness the Power of Today’s Megatrends”. Kary Bheemaiah is the author of “The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory”.

Terence Tse, Associate Professor of Finance / Head of Competitiveness Studies at i7 Institute for Innovation and Competitiveness, ESCP Europe; Kariappa Bheemaiah, Associate Research Scientist, Cambridge Judge Business School and Lecturer, Grenoble École de Management (GEM); and Mark Esposito, Professor of Business & Economics at Harvard University and Grenoble École de Management (GEM)
The Conversation

Blockchain technologies can enhance the security of IoT devices


The world is full of connected devices – and more are coming. In 2017, there were an estimated 8.4 billion internet-enabled thermostats, cameras, streetlights and other electronics. By 2020 that number could exceed 20 billion, and by 2030 there could be 500 billion or more. Because they’ll all be online all the time, each of those devices – whether a voice-recognition personal assistant or a pay-by-phone parking meter or a temperature sensor deep in an industrial robot – will be vulnerable to a cyberattack and could even be part of one.

Today, many “smart” internet-connected devices are made by large companies with well-known brand names, like Google, Apple, Microsoft and Samsung, which have both the technological systems and the marketing incentive to fix any security problems quickly. But that’s not the case in the increasingly crowded world of smaller internet-enabled devices, like light bulbs, doorbells and even packages shipped by UPS. Those devices – and their digital “brains” – are typically made by unknown companies, many in developing countries, without the funds or ability – or the brand-recognition need – to incorporate strong security features.

Insecure “internet of things” devices have already contributed to major cyber-disasters, such as the October 2016 cyberattack on internet routing company Dyn that took down more than 80 popular websites and stalled internet traffic across the U.S. The solution to this problem, in my view as a scholar of “internet of things” technology, blockchain systems and cybersecurity, could be a new way of tracking and distributing security software updates using blockchains.

Making security a priority

Today’s big technology companies work hard to keep users safe, but they have set themselves a daunting task: Thousands of complex software packages running on systems all over the world will invariably have errors that make them vulnerable to hackers. They also have teams of researchers and security analysts who try to identify and fix flaws before they cause problems.

When those teams find out about vulnerabilities (whether from their own or others’ work, or from users’ reports of malicious activity), they are well positioned to program updates, and to send them out to users. These companies’ computers, phones and even many software programs connect periodically to their manufacturers’ sites to check for updates, and can download and even install them automatically.

Beyond the staffing needed to track problems and create fixes, that effort requires enormous investment. It requires software to respond to the automated inquiries, storage space for new versions of software, and network bandwidth to send it all out to millions of users quickly. That’s how people’s iPhones, PlayStations and copies of Microsoft Word all stay fairly seamlessly up to date with security fixes.

None of that is happening with the manufacturers of the next generation of internet devices. Take, for example, Hangzhou Xiongmai Technology, based near Shanghai, China. Xiongmai makes internet-connected cameras and accessories under its brand and sells parts to other vendors.

Many of its products – and those of many other similar companies – contained administrative passwords that were set in the factory and were difficult or impossible to change. That left the door open for hackers to connect to Xiongmai-made devices, enter the preset password, take control of webcams or other devices, and generate enormous amounts of malicious internet traffic.

When the problem – and its global scope – became clear, there was little Xiongmai and other manufacturers could do to update their devices. The ability to prevent future cyberattacks like that depends on creating a way these companies can quickly, easily and cheaply issue software updates to customers when flaws are discovered.

A potential answer

Put simply, a blockchain is a transaction-recording computer database that’s stored in many different places at once. In a sense, it’s like a public bulletin board where people can post notices of transactions. Each post must be accompanied by a digital signature, and can never be changed or deleted.
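A toy sketch shows why such a ledger is tamper-evident: each block commits to the hash of its predecessor, so altering any past entry invalidates everything recorded after it. The block fields below are illustrative assumptions, not any particular blockchain’s format.

```python
import hashlib, json, time

def make_block(data, prev_hash):
    # A block commits to its content and to the hash of the previous block.
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    # Recompute every hash and check each link to its predecessor.
    for prev, block in zip(chain, chain[1:]):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("firmware v1.1 published", chain[-1]["hash"]))
print(verify(chain))            # True
chain[1]["data"] = "tampered"   # editing history breaks the chain
print(verify(chain))            # False
```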

I’m not the only person suggesting using blockchain systems to improve internet-connected devices’ security. In January 2017, a group including U.S. networking giant Cisco, German engineering firm Bosch, Bank of New York Mellon, Chinese electronics maker Foxconn, Dutch cybersecurity company Gemalto and a number of blockchain startup companies formed to develop just such a system.

It would be available for device makers to use in place of creating their own software update infrastructure the way the tech giants have. These smaller companies would have to program their products to check in with a blockchain system periodically to see if there was new software. Then they would securely upload their updates as they developed them. Each device would have a strong cryptographic identity, to ensure the manufacturer is communicating with the right device. As a result, device makers and their customers would know the equipment would efficiently keep its security up to date.

These sorts of systems would have to be easy to program into small devices with limited memory space and processing power. They would need standard ways to communicate and authenticate updates, to tell official messages from hackers’ efforts. Existing blockchains, including Bitcoin SPV and Ethereum Light Client Protocol, look promising. And blockchain innovators will continue to find better ways, making it even easier for billions of “internet of things” devices to check in and update their security automatically.
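To illustrate the authentication step, here is a hedged sketch of signed firmware updates using Ed25519 signatures from the third-party Python cryptography package. The package choice, key handling and firmware names are all assumptions; the proposed blockchain system would distribute such signed updates rather than replace the signing itself.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

maker_key = Ed25519PrivateKey.generate()     # kept secret by the manufacturer
trusted_pub = maker_key.public_key()         # baked into the device at the factory

firmware = b"camera-firmware-v2.4"
signature = maker_key.sign(firmware)         # published alongside the update

def device_install(image, sig):
    # The device installs only images that verify against the maker's key.
    try:
        trusted_pub.verify(sig, image)
        return "installed"
    except InvalidSignature:
        return "rejected"

print(device_install(firmware, signature))          # installed
print(device_install(b"evil-firmware", signature))  # rejected
```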

The importance of external pressure

It will not be enough to develop blockchain-based systems that are capable of protecting “internet of things” devices. If the devices’ manufacturers don’t actually use those systems, everyone’s cybersecurity will still be at risk. Companies that make cheap devices have small profit margins, so they won’t add these layers of protection without help and support from the outside. They’ll need technological assistance and pressure from government regulations and consumer expectations to make the shift from their current practices.

If it’s clear their products won’t sell unless they’re more secure, the unknown “internet of things” manufacturers will step up and make users and the internet as a whole safer.

Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

This article was originally published on The Conversation. Read the original article.
The Conversation

Central bank-backed digital currencies: the future?


While private digital currencies such as bitcoin are in the news daily, countries including China and Sweden are studying the creation of a new form of money – a central-bank digital currency (CBDC). The objective is to complement (or eliminate altogether) banknotes and coins. But CBDCs risk revolutionising both the way money is created and distributed and the present two-tier financial system of central and commercial banks.

Why are central banks considering the introduction of CBDCs?

Cost considerations play a role: banknotes and coins are costly to produce, distribute, handle and replace. Currently, handling costs related to cash are cross-subsidised by commercial banks’ revenues.

Banknotes allow anonymous transactions: a reduced use or elimination of banknotes would help fight illegal activities. For example, in an attempt to combat fraud and corruption back in November 2016, the Indian government launched a demonetization policy, withdrawing 86% of its currency overnight.

In Sweden, cash payments in the retail sector fell from close to 40% in 2010 to about 15% in 2016. Two-thirds of the country’s consumers now say that they can manage without cash, and more than half of all the country’s bank branches no longer conduct over-the-counter cash transactions.

Stefan Ingves, governor of Sweden’s central Riksbank, supports the creation of the “e-krona”, but stated that it’s “reasonable” for banks to continue handling money. “A ban on cash goes against the public perception of what money is and what banks do.” He also noted that for preparedness reasons, “we need notes and coins that work without electricity.”

The growing popularity of private digital currencies and the distributed-ledger payment technologies they use also have central banks on alert. They can ill afford to be left behind on the currency or the technology. The problem, Ingves recently said, was that all payments could end up being controlled by private-sector banks.

Can cash be eliminated?

The elimination of cash is currently not feasible. Not everyone has (or can have) a bank account, a credit/debit card, or access to electronic payment systems via a smartphone or computer. People cannot be forced to have or use these tools. Access to a debit/credit card might be denied to persons not deemed creditworthy. In addition, an economy entirely based on electronic payments is vulnerable to disruption, including cyberattacks.

But there are also important conceptual issues. Banknotes issued by central banks form our base money; they are our unit of measure of value. The United States abandoned the gold standard in 1971 and today countries no longer back their currency with a more primitive form of money such as gold (an exception is Venezuela, which recently launched the “petro”, a cryptocurrency backed by the country’s oil reserves); today’s base money is fiat money whose value is maintained by trust.

Nevertheless, the largest share of the monetary mass is not in banknotes but in bank deposits. Banknotes contribute from 5% to 10% of the monetary mass depending on the country; the remaining 90-95% is formed by bank deposits. Though a bank deposit is simply a number in a computer, it is a debt redeemable on demand in banknotes, with the central banks standing ready to supply the requisite banknotes should a commercial bank not have sufficient cash on hand.

Were there no banknotes as base money, deposits would not be the debt of commercial banks with their clients but simply numbers that represent purchasing power. These numbers would appear conventionally as liabilities on the balance sheets of banks whose only obligation would be to transfer, upon request, a given sum to another entity. “Money,” that is purchasing power, might thus be in the hands of private-sector banks. Public trust in the generation and distribution of money might be shaken.

CBDCs could change the creation and distribution of money

Central banks are studying ways to eliminate banknotes while retaining their role as providers of base money. Our current banking system is two-tiered with central banks and commercial banks performing distinctly different roles. Central banks guarantee the safety and integrity of money, ensure that the monetary mass allows for economic growth, and produce the cash required by economic activity. But central banks do not deal directly with non-bank entities; commercial banks store the public’s money in accounts and transfer that money on the demand of the account’s holder.

In the current two-tier banking system, money is generated in two ways. First, money is created by commercial banks when they simultaneously extend a loan and credit an account of the same sum. Second, following the 2007-08 financial crisis, central banks have been creating money with quantitative easing (QE); since QE began, the US Federal Reserve Bank has bought over $4.2 trillion in assets. Banknotes do not enter directly in this money creation process, but they do provide the accounting underpinnings. Central banks no longer target the total quantity of money directly but target instead interest rates.

Presently, individuals and non-bank entities cannot obtain banknotes directly from the central bank but must go through commercial banks. Should central banks create CBDCs as base money, there is the possibility that they will allow non-bank entities or individuals to hold CBDC accounts directly with the central bank. The possibility of doing so comes from technological advances that permit distributed ledgers, a technology that allows the safe peer-to-peer transfer of money without going through today’s clearing systems. Distributed ledgers are used, for example, to confirm transactions in private cryptocurrencies such as bitcoin and Ethereum.

The process could go further. Should the central banks allow private non-bank entities or individuals to hold CBDC accounts directly, central banks might extend credit in their digital currency. This could have important consequences for the two-tier banking system.

This article was originally published on The Conversation. Read the original article.
The Conversation

Cybersecurity is a valid concern in today’s world


As technology evolves and Australia becomes ever-more reliant on cyber systems throughout government and society, the threats that cyber attacks pose to the country’s national security are real – and significant.

Cyber weapons now exist that can be used to attack and exploit vulnerabilities in Australia’s national infrastructure. Many of the cyber threats that exist now, such as defacing a website, are not that serious.

But more nefarious attacks on software systems have the potential to damage critical infrastructure and threaten people’s lives.

The Australian Cyber Security Centre (ACSC) Threat Report addresses these concerns every year, highlighting the ubiquitous nature of cyber-crime in Australia, the potential for cyber-terrorism, and the vulnerability of data stored on government and commercial networks.

Governments now take these types of threats so seriously, they speak of the potential for military responses to cyber-attacks in the future. As one US military official told The Wall Street Journal:

If you shut down our power grid, maybe we will put a missile down one of your smokestacks.

A securitised internet

Such concerns have been a key part of Australia’s ambitions to revamp its national security to respond to future cyber-threats. Australia’s Cyber Security Strategy, for instance, states that:

all of us – governments, businesses and individuals – need to work together to build resilience to cybersecurity threats and to make the most of opportunities online.

An important ethical concern with such a focus, however, is the risk that Australia’s cyberspace becomes “securitised”.

When we securitise an issue, we frame the activity as being conducted in a state of emergency. A state of emergency is when a government temporarily changes the conditions of its political and social institutions in response to a particularly serious emergency. This might be a natural disaster, war or rioting, for example. Importantly, due process constraints on government officials, such as habeas corpus, are suspended.

An ethical problem with a securitised or militarised cyberspace, especially if it becomes a permanent measure, is that it can quickly erode fundamental human rights such as privacy and freedom of speech.

Ethical problems in a brave new world

For instance, what are the ethical implications of conducting military activities against terrorist propaganda online, by conducting psychological operations on social media platforms, say, or simply shutting them down?

Using social media in this way would be counter to the social and civil function of these channels of communication. Trying to deny audiences the ability to speak freely on social media could also undermine the internet’s effectiveness as a tool for social and economic good. This is especially problematic in Australia, where fundamental human rights such as privacy and freedom of speech are taken for granted as fundamental civic values.

There is also potential for a militarised cyberspace to increase the likelihood of conflict between states. As cyber-attacks are a relatively new threat, it’s unclear what actions might lead to escalation and constitute an act of war.

The perception that cyber-attacks are not as harmful as, say, a missile attack could lead to their increased use. This opens the door to potentially more serious forms of conflict.

Another important ethical consideration is the enhanced government surveillance of a securitised internet. The fall-out from the Edward Snowden disclosures, for instance, revealed the intrusiveness of US security agencies’ activities online. This in turn had the effect of undermining the public’s trust in the government.

Such a loss of trust in one segment of the government can have potentially dire impacts on other areas. For example, in response to public suspicions of the actions of security agencies, governments might overreact and cut worthwhile surveillance programmes. Or disgruntled government employees (like Snowden) might leak other types of confidential or sensitive information to the detriment of the public good.

A recent example of this occurred when highly sensitive correspondence between Home Affairs Secretary Mike Pezzullo and Defence Secretary Greg Moriarty was leaked to the media. The communications detailed plans to give the Australian Signals Directorate new domestic surveillance powers. Mark Dreyfus, the national security shadow minister, labelled the leak “a deeply worrying signal of internal struggles”.

So it is important that Australian government agencies tasked with managing national security in cyberspace consistently act in a trustworthy manner. As such, there should be guarantees that decisions related to cyber-security oversight and governance are not driven by short-term political gains.

In particular, government decision-makers should seek to promote an informed and public debate about the standards required for “minimum transparency, accountability and oversight of government surveillance practices.”

Anything short of that could make the country’s cyber-infrastructure less secure – a frightening prospect in an increasingly hostile and volatile digital world.

Dr Shannon Brandt Ford, Lecturer, Curtin University

This article was originally published on The Conversation. Read the original article.

Automation will greatly increase the safety of oil rigs


Offshore oil rigs can be extremely dangerous places to work. Over the last few decades, several offshore explosions have led to environmental disasters and the death of workers. Regulations have so far failed to stop fatal accidents from occurring. But with developments in technology, particularly the rise of automation, we’re hoping that future accidents can be reduced.

Small offshore rigs are the subject of research for automated monitoring systems, which use a variety of wireless sensors. And, in a world first, an autonomous robot will soon be deployed to monitor equipment and inspect for gas leaks on a North Sea rig. If these technologies can be combined with tougher regulations, we might have found the key to reducing future loss of property and life.

In 1988, 167 people were killed in the Piper Alpha disaster. Since then, the safety and risk assessment of offshore installations has become much more rigorous. Regulations now require duty holders and owners, such as Petrofac and Shell, to demonstrate that they have taken every possible measure to stop major accidents.

But in 2010, the offshore world suffered another disaster, when an explosion destroyed the Deepwater Horizon installation in the Gulf of Mexico. 11 people were killed and the resulting oil leak had huge environmental consequences. The cause of this disaster was a broken subsea Blowout Preventer (BOP), a piece of machinery that is used to seal, control and monitor the uncontrolled release of oil and/or gas.

Since Piper Alpha, every offshore accident has led the industry and governments to readdress the safety concerns surrounding offshore installations. Most recently, in 2016, the Obama administration outlined new drilling regulations aimed at preventing a repeat of the Deepwater Horizon disaster. These regulations require a greater number of independent inspectors and improved safety equipment.

But in the absence of a more recent major offshore disaster, the Trump administration is set to roll back these regulations with the aim of reducing “unnecessary burdens” on the industry. In reality, these changes could be a recipe for disaster. Instead of reducing offshore safety regulations, we should be expanding them.

Many current regulations are still based on “static documents”. This means that they have been rarely updated since they were introduced decades ago, and exist relatively unchanged.

The rise of automation

The recurrence of major disasters means that we need to find a better way to predict and stop accidents before they happen. One radical approach is to rely more heavily on automation. Automated monitoring systems can range from remote sensing and recording devices to actual robots. Many different approaches have been proposed, but all with the same goal of preventing the loss of life and property.

One such approach is being tested later this year. A North Sea oil rig will deploy the first ever autonomous robot that will move around specific areas of the rig, visually inspecting equipment and detecting gas leaks. It can navigate narrow pathways and even negotiate stairways in order to fulfil its inspections. The robot will be based in areas that are considered high risk for humans, such as gas turbine modules, the equipment that provides energy to the offshore rig.

Currently, it is often humans who inspect for gas leaks, but any mistake could lead to the death of everyone in the vicinity. By applying autonomous systems to monitor gas leaks, we reduce the risk to the humans carrying out these tasks. And because automated robots can inspect continuously, we also expect failures to occur less often.

A robot will soon be used to detect gas leaks.

Another approach being researched for smaller offshore rigs is the Asset Integrity Monitoring method, which allows continual live monitoring of offshore sites. Sensors are deployed inside or very close to the equipment, constantly detecting and transmitting any changes. For example, a sensing network could monitor the integrity of a gas turbine by recording temperature as well as the pressure and flow of the fuel gas.

While these are already monitored on offshore platforms, in many situations they require physical inspection from a crew member. A remote monitoring system would use wireless networks to relay all of the relevant information to a central hub. Here a complete status regarding the integrity of the machine can be analysed.

This technology would give safety officials a clear picture of the whole rig and its different component parts. The information could constantly be compared with offshore regulations and assist with their enforcement. The next major step is for them to be tested and implemented on offshore platforms.
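As a rough illustration of that comparison step, the sketch below checks hypothetical turbine readings against assumed safe operating envelopes; every sensor name and threshold here is invented for demonstration.

```python
# Assumed safe operating envelopes for a gas turbine's monitored values.
SAFE_RANGES = {
    "temperature_c": (20.0, 450.0),
    "fuel_pressure_bar": (2.0, 40.0),
    "fuel_flow_kg_s": (0.1, 5.0),
}

def breaches(readings):
    # Return the sensors whose latest reading falls outside its envelope.
    return [sensor for sensor, value in readings.items()
            if not SAFE_RANGES[sensor][0] <= value <= SAFE_RANGES[sensor][1]]

latest = {"temperature_c": 481.5, "fuel_pressure_bar": 18.2, "fuel_flow_kg_s": 1.4}
print(breaches(latest))  # ['temperature_c'] -> raise an alert at the central hub
```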

To improve safety on offshore oil rigs, the most important factor is ensuring that appropriate safety procedures are applied to appropriate systems. For example, it would be useless to deploy the autonomous robot into a low risk area to improve the safety of offshore operations. These automated systems are being developed in order to improve safety in high risk areas.

Finding the right balance between automation and the risks posed by certain jobs will be the key to successfully introducing automation to offshore oil rigs. Whatever happens, automation will not be immediately thrust into the sector, but increasingly it looks like the future of offshore rig safety.

Sean Loughney, Postdoctoral research associate, Liverpool John Moores University and Jin Wang, Professor of Marine Technology, Liverpool John Moores University

This article was originally published on The Conversation. Read the original article.


AI’s reaching potential


Google recently unveiled its latest talking AI, called Duplex. Duplex sounds like a real person, complete with pauses, “umms” and “ahhs”.

The tech giant says it can talk to people on the phone to make appointments and check business opening hours.

Duplex scheduling a hair salon appointment.
Google

In recorded conversations that were played at the Google unveiling, it conversed seamlessly with the humans on the receiving end, who seemed totally unaware that they were not talking with another person.

Duplex calling a restaurant.
Google

These calls left the technology-oriented audience at the Google show gasping and cheering. In one example, the AI even understood when the person it was talking to got mixed up, and was able to continue following the conversation and respond appropriately when it was told it didn’t need to make a booking.

The rise of the AI assistants

If you’ve used any of the currently available voice assistants, such as Google Home, Apple’s Siri or Amazon Echo, this flexibility might surprise you. These assistants are notoriously difficult to use for anything other than the standard requests such as to phone a contact, play a song, do a simple web search, or set a reminder.

When we speak to these current-generation assistants, we are always aware that we are talking to an AI and we often tailor what we say accordingly, in a way that we hope maximises our chances of making it work.

But the people talking to Duplex had no idea. They hesitated, backtracked, skipped words, and even changed facts partway through a sentence. Duplex didn’t miss a beat. It really seemed to understand what was going on.

So has the future arrived earlier than anyone expected? Is the world about to be full of online (and on-phone) AI assistants chatting happily and doing everything for us? Or worse, will we suddenly be surrounded by intelligent AIs with their own thoughts and ideas that may or may not include us humans?

The answer is a definite “no”. To understand why, it helps to take a quick look under the hood at what drives an AI such as this one.

Duplex: how it works

This is what the Duplex AI system looks like.

Incoming sound is processed through an ASR system. This produces text that is analysed with context data and other inputs to produce a response text that is read aloud through the text-to-speech (TTS) system.
Google

The system takes “input” (shown on the left) which is the voice of the person it is talking to on the phone. The voice goes through automatic speech recognition (ASR) and gets converted into text (written words). The ASR is itself an advanced AI system, but of a type that is already in common use in existing voice assistants.

The text is then scanned to determine the type of sentence it is (such as a greeting, a statement, a question or an instruction) and extract any important information. The key information then becomes part of the Context, which is extra input that keeps the system up to date with what has been said so far in the conversation.

The text from the ASR and the Context is then sent to the heart of Duplex, which is called an Artificial Neural Network (ANN).

In the diagram above, the ANN is shown by the circles and the lines connecting them. ANNs are loosely modelled on our brains, which have billions of neurons connected together into enormous networks.

Not quite a brain, yet

ANNs are much simpler than our brains though. The only thing that this one tries to do is match the input words with an appropriate response. The ANN learns by being shown transcripts of thousands of conversations of people making bookings for restaurants.

With enough examples, it learns what kinds of input sentences to expect from the person it is talking to, and what kinds of responses to give for each one.

The text response that the ANN generates is then sent to a text-to-speech (TTS) synthesizer, which converts it into spoken words which are then played to the person on the phone.

Once again, this TTS synthesizer is an advanced AI – in this case it is more advanced than the one on your phone, because it sounds almost indistinguishable from any normal voice.
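To make that data flow concrete, here is a deliberately toy sketch of the same pipeline shape. The ASR, response selection and speech synthesis stages are stubbed with trivial stand-ins, and every name and rule is hypothetical; Duplex’s actual models are vastly more sophisticated.

```python
context = {}  # accumulates what has been said so far in the conversation

def asr(audio):
    # Stand-in for automatic speech recognition: assume audio is already text.
    return audio

def classify_and_update(text):
    # Crude intent detection; the real system uses a trained neural network.
    if "book" in text or "reservation" in text:
        context["intent"] = "booking"

RESPONSES = {
    "booking": "Sure, what time would you like the reservation?",
    None: "Sorry, could you repeat that?",
}

def respond(text):
    classify_and_update(text)
    return RESPONSES.get(context.get("intent"), RESPONSES[None])

def tts(text):
    print(f"[spoken]: {text}")  # stand-in for the text-to-speech synthesizer

tts(respond(asr("I'd like to book a table for four people")))
```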

That’s all there is to it. Despite it being state-of-the-art, the heart of the system is really just a text matching process. But you might ask – if it’s so simple, why couldn’t we do it before?

A learned response

The fact is that human language, like most other things in the real world, is too variable and disorderly to be handled well by normal computers. But this sort of problem is perfect for AI.

Note that the output produced by the AI depends entirely on the conversations it was shown while it was learning.

This means that different AIs need to be trained to make bookings of different types – so, for example, one AI can book restaurants and another can book hair appointments.

This is necessary because the types of questions and responses can vary so much for different types of bookings. This is also how Duplex can be so much better than the general voice assistants, which need to handle many types of requests.

So now it should be apparent that we are not going to be having casual conversations with our AI assistants any time soon. In fact, all of our current AIs are really nothing more than pattern matchers (in this case, matching patterns of text). They don’t understand what they hear, or what they look at, or what they say.

Pattern matching is one thing our brains do, but they also do so much more. The key to creating more powerful AI may be to unlock more of the secrets of the brain. Do we want to? Well, that’s another question.

Peter Stratton, Postdoctoral Research Fellow, The University of Queensland

This article was originally published on The Conversation. Read the original article.


Consumer technology is saving the environment.


Understanding Earth’s species and ecosystems is a monumentally challenging scientific pursuit. But with the planet in the grip of its sixth mass extinction event, it has never been a more pressing priority.

To unlock nature’s secrets, ecologists turn to a variety of scientific instruments and tools. Sometimes we even repurpose household items, with eyebrow-raising results – whether it’s using a tea strainer to house ants, or tackling botfly larvae with a well-aimed dab of nail polish.

But there are many more high-tech options becoming available for studying the natural world. In fact, ecology is on the cusp of a revolution, with new and emerging technologies opening up new possibilities for insights into nature and applications for conserving biodiversity.

Our study, published in the journal Ecosphere, tracks the progress of this technological development. Here we highlight a few examples of these exciting advances.

Tiny tracking sensors

Electronically recording the movement of animals was first made possible by VHF radio telemetry in the 1960s. Since then even more species, especially long-distance migratory animals such as caribou, shearwaters and sea turtles, have been tracked with the help of GPS and other satellite data.

But our understanding of what affects animals’ movement and other behaviours, such as hunting, is being advanced further still by the use of “bio-logging” – equipping the animals themselves with miniature sensors.

Many types of miniature sensors have now been developed, including accelerometers, gyroscopes, magnetometers, micro cameras, and barometers. Together, these devices make it possible to track animals’ movements with unprecedented precision. We can also now measure the “physiological cost” of behaviours – that is, whether an animal is working particularly hard to reach a destination, or within a particular location, to capture and consume its prey.
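One widely used bio-logging measure of this physiological cost is overall dynamic body acceleration (ODBA), a proxy for energy expenditure computed from tri-axial accelerometer data. The sketch below uses simulated readings and a simple running-mean estimate of the gravity component, both illustrative assumptions.

```python
def running_mean(xs, window=5):
    # Smooth each axis to estimate the static (gravity) component.
    half = window // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

def odba(ax, ay, az):
    # Dynamic acceleration per axis = |raw - static|; ODBA sums the axes.
    dyn = lambda axis: [abs(v - m) for v, m in zip(axis, running_mean(axis))]
    return [x + y + z for x, y, z in zip(dyn(ax), dyn(ay), dyn(az))]

# Simulated tri-axial trace (in g) with a burst of activity mid-record.
ax = [0.0, 0.1, 0.9, -0.8, 0.1, 0.0]
ay = [1.0, 1.0, 1.6, 0.3, 1.0, 1.0]
az = [0.1, 0.0, 0.5, -0.4, 0.0, 0.1]
print(odba(ax, ay, az))  # higher values indicate harder physical work
```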

Taken further, placing animal movement paths within spatially accurate 3D-rendered (computer-generated) environments will allow ecologists to examine how individuals respond to each other and their surroundings.

These devices could also help us determine whether animals are changing their behaviour in response to threats such as invasive species or habitat modification. In turn, this could tell us what conservation measures might work best.

Autonomous vehicles

Remotely piloted vehicles, including drones, are now a common feature of our skies, land, and water. Beyond their more typical recreational uses, ecologists are deploying autonomous vehicles to measure environments, observe species, and assess changes through time, all with a degree of detail that was never previously possible.

Coupling autonomous vehicles with sensors (such as thermal imaging) now makes it easier to observe rare, hidden or nocturnal species. It also potentially allows us to catch poachers red-handed, which could help to protect animals like rhinoceros, elephants and pangolins.

3D printing

Despite 3D printing having been pioneered in the 1980s, we are only now beginning to realise its potential uses in ecological research. For instance, it can be used to make cheap, lightweight tracking devices that can be fitted onto animals. Or it can be used to create complex and accurate models of plants, animals or other organisms for use in behavioural studies.

Bio-batteries

Keeping electronic equipment running in the field can be a challenge. Conventional batteries have limited life spans, and can contain toxic chemicals. Solar power can help with some of these problems, but not in dimly lit areas, such as deep in the heart of rainforests.

“Bio-batteries” may help to overcome this challenge. They convert naturally occurring sources of chemical energy, such as starch, into electricity using enzymes. “Plugging-in” to trees may allow sensors and other field equipment to be powered cheaply for a long time in places without sun or access to mains electricity.

Combining technologies

All of the technologies described above sit on a continuum from previous (now largely mainstream) technological solutions, to new and innovative ones now being trialled.

Illustrative timeline of new technologies in ecology and environmental science. Source and further details at DOI: 10.1002/ecs2.2163.
Euan Ritchie

Emerging technologies are exciting by themselves, but when combined with one another they can revolutionise ecological research. Here is a modified excerpt from our paper:

Imagine research stations fitted with remote cameras and acoustic recorders equipped with low-power computers for image and animal call recognition, powered by trees via bio-batteries. These devices could use low-power, long-range telemetry both to communicate with each other in a network, potentially tracking animal movement from one location to the next, and to transmit information to a central location. Swarms of drones working together could then be deployed to map the landscape and collect data from a central location wirelessly, without landing. The drones could then land in a location with an internet connection and transfer data into cloud-based storage, accessible from anywhere in the world.

Visualisation of a future smart research environment, integrating multiple ecological technologies. The red lines indicate data transfer via the Internet of things (IoT), in which multiple technologies are communicating with one another. The gray lines indicate more traditional data transfer. Broken lines indicate data transferred over long distances. (1) Bio-batteries; (2) The Internet of things (IoT); (3) Swarm theory; (4) Long-range low-power telemetry; (5) Solar power; (6) Low-power computer; (7) Data transfer via satellite; and (8) Bioinformatics. Source and further details at DOI: 10.1002/ecs2.2163.
Euan Ritchie

These advancements will not only generate more accurate research data, but should also minimise the disturbance to species and ecosystems in the process.

Not only will this minimise the stress to animals and the inadvertent spread of diseases, but it should also provide a more “natural” picture of how plants, animals and other organisms interact.

Realising the techno-ecological revolution will require better collaboration across disciplines and industries. Ecologists should ideally also be exposed to relevant technology-based training (such as engineering or IT) and industry placements early in their careers. Several initiatives, such as Wildlabs, the Conservation Technology Working Group and TechnEcology, are already addressing these needs. But we are only just at the start of what’s ultimately possible.

Euan Ritchie, Associate Professor in Wildlife Ecology and Conservation, Centre for Integrative Ecology, School of Life & Environmental Sciences, Deakin University and Blake Allan, Deakin University

This article was originally published on The Conversation. Read the original article.

Is a hyper-connected workforce leading to an intelligent or a vulnerable workplace?


“Hyper-connectivity” describes the demand from a new generation of employees who believe that digitally connected communication, replacing paperwork with mobile forms, simplifies data collection and retrieval. It lets staff work conveniently and stay open to communication so that any issue can be resolved quickly, which is why they wish to remain connected whenever and wherever they are.

Connecting people across a company enhances accuracy and productivity, boosts collaboration, provides greater business agility and speeds up communication between employees and customers.

But the shift to a digitally connected work environment doesn’t just happen. The younger generation (under 35) feel that the necessary tools are lacking, while the older generation (over 50) feel they lack the skills to use them. Furthermore, to use all the tools, employees might need to be present in the workplace itself.

But the most important issue is that of security. Since being hyper-connected makes heavy use of the internet, broadband, camera phones and instant messaging, the chances of security breaches increase.

Organisations therefore become more and more vulnerable, owing to enlarged attack surfaces, network-to-network bridgeheads and applications that take advantage of devices connected to the internet 24 hours a day, all of which may lead to cyberattacks and leaks of confidential data.

No doubt there are risks, but we should not resist the hyper-connected world. Instead, companies should adopt the right management choices and tools. They should aim to develop more adaptable and flexible workspaces while taking care of security issues. This will allow them to leverage the potential of the digital world for a brighter future.

Excess of anything is bad, even water

Water poisoning is a real thing

Ever wondered if there can be too much of a good thing? Oh yes! Water intoxication (also called water poisoning, hyper-hydration or water toxaemia) is a condition that arises when someone drinks so much water in a short period of time that the kidneys cannot flush it out fast enough and the blood becomes waterlogged. As salt levels fall (upsetting the sodium/potassium balance), cells adjust by absorbing more and more water. This electrolyte imbalance can be fatal for brain function: swollen brain cells can press on nerves and squeeze blood vessels shut, leading to a lack of oxygen.

The early symptoms include nausea and headache; left untreated, the condition may cause muscle weakness, spasms, seizures, unconsciousness, coma and, in extreme cases, death due to brain oedema.

Over-hydration can affect the following groups: marathon runners, endurance cyclists, army recruits, hikers, infants (because of their low body mass), miners working in extreme heat and humidity, people using drugs such as MDMA, people with renal failure or kidney dysfunction, and people who drink compulsively, whether because of mental illness, a water-drinking contest or strenuous exercise.

Staying hydrated is important, but knowing how much is enough can be hard. Your urine, however, is a good indicator of your hydration status. Pale yellow urine is a good sign; darker urine means you are dehydrated, while constantly passing clear, colourless urine means you are over-hydrated. Experts suggest that more than 1.5 litres of water per hour can be lethal, and that on average 9-13 cups of water (80-100 ounces) per day is ideal.

So be warned, those of you trying to lose weight quickly with heavy, strenuous exercise routines: staying hydrated is important, but moderation is the key. If you are pushing fluids beyond the point of comfort, maybe it is time to stop drinking – water.
