
Stablecoins: a more stable cryptocurrency option?

Bitcoin has plunged from a high of almost US$20,000 in December 2017 to as low as US$3,675. So it’s understandable that some cryptocurrency users might be looking for more stability. With the future of Bitcoin and other cryptocurrencies uncertain, a possible new solution known as “stablecoins” has emerged. This cryptocurrency aims to hold its value better than others, which could offer investors more stability.

Cryptocurrencies are digital tokens that act as a form of currency, effectively allowing people to perform transactions without a bank or intermediary. Most cryptocurrencies have no intrinsic value, deriving their price from what others will pay. This, alongside price speculation from those hoping values will rise, has led to significant price volatility in cryptocurrencies.

Unlike other cryptocurrencies, stablecoins aim to maintain their worth better by being redeemable for something else of tangible value, like regular fiat currencies such as US dollars, or even gold.

The stablecoin’s underlying asset (the monetary value that investors expect it to trade at) would normally be deposited with a trusted bank. If people are confident they can redeem these coins for that currency, and that the issuer holds sufficient reserves for all coins in circulation, the price of the stablecoin shouldn’t fall below the underlying asset value.

The most widely used stablecoins are Tether, TrueUSD and USD Coin, which bind their value to the US dollar. Tether has experienced some short-term volatility, fluctuating between $0.95 and $0.989. TrueUSD has held stable, but USD Coin has had slight instability – though even its biggest drop remained within 1.8% of the dollar. Compared with other cryptocurrencies, then, stablecoins have remained stable.

But there’s nothing technical keeping the price of stablecoins at a fixed value. If people lose confidence that the issuer has enough assets reserved to honour the value of all coins if redeemed, it could lead to significant price variations. The price could also rise if demand outstrips supply of a stablecoin.

Why are stablecoins becoming popular?

The recent crash of Bitcoin and other cryptocurrencies, alongside inconsistent trading prices across exchanges, have influenced the perception that cryptocurrencies are unpredictable. The idea of a cryptocurrency with a fixed value has understandable appeal, especially among those wanting to make purchases with cryptocurrencies.

Cryptocurrency exchanges are also moving away from interacting with banking systems because of heightened regulatory scrutiny of cryptocurrency operations. In some notable cases, exchanges have even had their funds frozen by banks. This has led some popular cryptocurrency exchanges to stop allowing transactions between cryptocurrencies and real money. So, to buy on these exchanges, people need existing cryptocurrencies – making stablecoins a good option for starting out.

Will computer algorithms maintain stability?

Seigniorage-based stablecoins are the latest development. These use computer algorithms to control the stablecoin’s availability by buying and selling it automatically based on real-time prices, ideally keeping the coin’s price stable. If prices rise, coins from reserves would be made available to buy, which increases supply and reduces price. If the price falls, the algorithm can buy back coins (using other cryptocurrencies held in reserves) to reduce supply and increase the price.
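To make that mechanism concrete, here is a minimal sketch of such a rebalancing rule in Python. It is an illustration only: the peg, tolerance band and per-step limits are invented, not taken from any real stablecoin.

```python
# Minimal sketch of a seigniorage-style rebalancing rule.
# The peg, band and per-step cap are illustrative values.

TARGET_PRICE = 1.00   # the peg, e.g. US$1 per coin
BAND = 0.005          # tolerate +/- 0.5% before intervening
STEP_CAP = 1_000_000  # illustrative limit on coins moved per step

def rebalance(market_price: float, reserve_coins: float, reserve_funds: float):
    """Return (coins_to_sell, coins_to_buy) for one rebalancing step."""
    if market_price > TARGET_PRICE + BAND:
        # Price too high: release coins from reserve to expand supply.
        return min(reserve_coins, STEP_CAP), 0.0
    if market_price < TARGET_PRICE - BAND:
        # Price too low: spend reserve funds to buy coins back,
        # shrinking supply -- only possible while reserves last.
        affordable = reserve_funds / market_price
        return 0.0, min(affordable, STEP_CAP)
    return 0.0, 0.0  # within the band: do nothing
```

Note the asymmetry: expanding supply only requires releasing coins, while defending the peg on the way down depends on finite reserves – the weakness described next.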

But if supply increases too rapidly, the algorithm won’t have sufficient funds to buy back enough coins to stabilise the price. This could cause the value to plummet, especially if people lose confidence in the coin issuer. The same can happen to regular fiat currencies, not just stablecoins: a currency is only valuable if others will accept it – otherwise it quickly loses worth.

The future

Stablecoins might present a solution to short-term volatility, provided the currency backing its value remains stable in worth. But they won’t fix confidence losses, especially if the value of the stablecoin’s reserved assets is questioned. If the ability to redeem this currency is at risk, the stablecoin’s price will likely fall.

Seigniorage-based cryptocurrencies may handle limited volatility if they have enough reserves to control supply through algorithmic buying and selling. But this still requires people to willingly hold or accept the coin. Flash crashes, which occur when a lot of a cryptocurrency is sold in a short time, are not unheard of – showing the real potential for extreme volatility from large transactions.

There’s also a significant premium for using stablecoins to purchase other cryptocurrencies. At the time of writing, it cost almost US$118 more to buy one Bitcoin using Tether than using US dollars, despite both supposedly having the same underlying value. If the market truly saw stablecoins as a solution to cryptocurrency volatility, the price would be the same as with cash.

While stablecoins might reduce the amount of risk buyers see in cryptocurrency, especially related to price instability, it’s unlikely they’ll actually be used more generally.

Using stablecoins for day-to-day transactions has many challenges, especially if the system can’t make more coins available when demand increases. Stablecoins also aren’t protected by the compensation schemes that cover some bank accounts, making it unlikely most people will replace their cash accounts with them.

Regular cryptocurrencies also offer potentially higher returns than stablecoins, which appeals to risk-takers. Major investment banks are also exploring ways to take advantage of cryptocurrencies’ price volatility, as this creates more opportunity for profit and will attract investors.

This isn’t to say stablecoins have no future. People living in countries with unstable local currencies could use stablecoins to digitally hold a more stable foreign currency. However, while stablecoins could be more secure than real currencies in some situations, the values will still fluctuate if people lose confidence in their worth.

Despite the volatile market, cryptocurrencies like Bitcoin remain popular with investors and ordinary people hoping to become Bitcoin millionaires. While stablecoins might seem a shrewd alternative, it’s unlikely people will trade their chance to earn millions for security.

This article was originally published on The Conversation. Read the original article.

What is edge computing and why does it matter?

Time travel to the UK in 2025: Harry is a teenager with a smartphone and Pauline is a senior citizen with Alzheimer’s who relies on smart glasses for independent living. Harry is frustrated his favourite online game is slow, and Pauline is anxious since her healthcare app is unresponsive.

Forbes predicts that by 2025 more than 80 billion devices – from wearables and smartphones to factory and smart-city sensors – will be connected to the internet. Something like 180 trillion gigabytes of data will be generated that year.

Currently almost all data we generate is sent to and processed in distant clouds. The cloud is a facility that provides virtually unlimited computer power and storage space over the internet. This mechanism is already becoming impractical, but by the time billions more devices are connected, delays due to congested networks will be significant. Harry and Pauline’s frustrations will be the norm as apps communicate with distant clouds over a busy internet, becoming slower and less responsive.

Disruptive technology

After all, milliseconds matter. Harry will have a poor gaming experience if there is a 50 millisecond delay on his smartphone. Even a 10 millisecond lag between the movement of Pauline’s head and the appearance of processed information on the smart glasses will cause motion sickness.

To imagine another futuristic scenario, a delay of one-tenth of a second could prove disastrous for an autonomous car driving at 70 miles per hour. It is not inconceivable, therefore, that limitations in current cloud provision could lead to life-or-death scenarios for users. For cloud users to operate in real time, experiencing delays of no more than one millisecond – assuming networks worldwide can transmit data at the speed of light – data will need to be processed less than 93 miles from the user.
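The 93-mile figure is simple arithmetic: in one millisecond, a signal must reach the server and return, so it can travel at most half of what light covers in that time. A quick check:

```python
# Back-of-the-envelope check of the 93-mile figure.
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282   # speed of light in a vacuum
ROUND_TRIP_BUDGET_SEC = 0.001            # 1 millisecond total delay

# The signal travels to the server and back, so the one-way distance
# can be at most half of what light covers within the budget.
max_distance = SPEED_OF_LIGHT_MILES_PER_SEC * ROUND_TRIP_BUDGET_SEC / 2
print(f"Max distance to the server: {max_distance:.0f} miles")  # ~93 miles
```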

Edge computing is a disruptive new technology, still in its infancy, which offers a solution. Delays will be reduced by processing data geographically closer to the devices where it is needed, that is, at the edge of the network, instead of in a distant cloud. For example, smartphone data could be processed on a home router, and navigation guidance information on smart glasses could be obtained from a mobile base station instead of the cloud.

Will this really happen?

The value of edge computing is to make applications highly responsive by minimising delays. This compelling proposition has attracted significant investment from major companies, including Cisco, Dell and Arm, all of whom have a major global footprint. The market is headed towards embracing the edge, and researchers across universities are closely examining and developing this new technology.

Cost-effective application will require the edge to do a lot of data pre-processing before anything is sent to the cloud. Proof-of-concept evidence from pilot projects demonstrates that a variety of applications benefit from using the edge, including online games, health care apps, military applications and autonomous cars.
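As a hypothetical illustration of that pre-processing, the sketch below aggregates raw sensor readings on an edge device and ships only compact summaries to the cloud; the window size and the send_to_cloud callback are invented for the example.

```python
# Hypothetical sketch: an edge device summarises raw sensor readings
# locally and uploads only compact aggregates to the cloud.
from statistics import mean

WINDOW = 60  # aggregate one minute of once-per-second readings

def summarise(readings: list[float]) -> dict:
    """Reduce a window of raw readings to a small summary record."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

buffer: list[float] = []

def on_reading(value: float, send_to_cloud) -> None:
    """Called for every raw reading; uploads one record per window."""
    buffer.append(value)
    if len(buffer) >= WINDOW:
        send_to_cloud(summarise(buffer))  # 1 upload instead of 60
        buffer.clear()
```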

A number of alliances, such as the OpenEdge and OpenFog consortia, are developing standards for using the edge. Even major cloud providers, including Amazon Web Services and Microsoft Azure, have developed software systems for using the edge. The market is estimated to be worth between US$6 billion and US$10 billion over the next five years.

Will the cloud become obsolete?

Cloud data centres are facilities around the globe that concentrate processing and storage capabilities. They are one of the central planks of modern economies. Today they are critical infrastructure because very little processing can be done between the user’s device and the cloud; but once processing is done at the edge, the central role of the cloud will change.

The massive storage and scalable resources available in the cloud will obviously not be accessible at the edge with its limited computing and storage capabilities, but the edge will become central for real-time processing. The edge will not have an existence of its own without the backing of the cloud, but the cloud will become a slightly more passive technology since resources required for processing and/or storage will be decentralised along the cloud/edge continuum.

Safety

Jason Bourne always managed to outsmart his assailants by blending in with a large rioting crowd or a busy marketplace. Thousands of cloud data breaches affecting billions of people were reported in 2017. A home router is a needle in a haystack of devices at the edge – which, even if compromised, would not give access to billions of users’ data. So that alone is a huge plus, as mass breaches can be avoided.

Processing a user’s data on a home router, without leaving a data footprint outside the home network, is more secure than leaving all the data in the cloud. More public edge devices, such as internet gateways or mobile base stations, will hold the data footprints of many users, so the systems required to fully protect the edge are still a major research focus.

Questions remain to be answered throughout the adoption process, but the inevitable conclusion is clear: the edge will change not only the cloud’s future, but also those of us – like Harry and Pauline – who depend on it every day.

This article was originally published on The Conversation. Read the original article.


Why companies should be paying to protect biodiversity

In “The Lorax,” an entrepreneur regrets wiping out all the make-believe Truffula trees by chopping them down to maximize his short-term gains. As the Dr. Seuss tale ends, the Once-ler – the man responsible for this environmental tragedy – tells a young child that “Unless someone like you cares a whole awful lot, nothing is going to get better. It’s not.”

Likewise, many corporations that profit from nature’s bounty, such as Unilever, Patagonia and Interface, appear to be reaching a similar conclusion. They are realizing that it’s time for the business world to do more about conservation.

We, two economists who have extensively researched natural resources and development, are proposing a new way to solve the problem of species and ecosystem loss. Corporations that benefit from biodiversity could forge what some are calling a “new deal for nature” by paying part of the tab for biodiversity conservation.

Biodiversity

Biodiversity, the variety of all natural ecosystems and species, is being lost at an unprecedented rate. According to the recent World Wildlife Fund Living Planet Report, the populations of mammals, birds, fish, reptiles and amphibians have fallen by an average of 60 percent in just over 40 years. The scientists Gerardo Ceballos, Paul R. Ehrlich and Rodolfo Dirzo have dubbed this decline and an impending wave of extinctions a “biological annihilation.”

We argue that many businesses are threatened by the loss of species and ecosystems, such as declining bee populations and dwindling stocks of fish, forests, wetlands and mangroves. Without an array of ecosystems and species, it’s tough for farmers to grow crops or ranchers to raise animals.

The pharmaceutical industry needs them to discover and create drugs. For example, one team of U.S.-based researchers estimates that the pharmaceutical value of marine biodiversity for anti-cancer drug discovery could range from US$563 billion to as much as $5.7 trillion.

Insurance companies depend on coastal wetlands to minimize the impact of big storms. For example, an international group of researchers estimated that preserving one hectare of mangroves in the Philippines yields more than $3,200 in flood-reduction benefits each year.

A global treaty, the Convention on Biological Diversity, does set worldwide conservation targets. But we believe they may not be ambitious enough. Cristiana Pașca Palmer, the UN’s biodiversity chief, is considering raising the treaty’s targets to conserve at least half of terrestrial, inland water, coastal and marine habitats to preserve biodiversity.

But the existing efforts to preserve biodiversity are not only inadequate. They’re underfunded.

New way to pay

Global biodiversity protection requires $100 billion annually, according to a previous study one of us conducted, yet the international community spends up to $10 billion each year on biodiversity conservation.

Much of the world’s biodiversity is in developing countries, which lack the financial wherewithal to adequately conserve it.

As we have explained with our colleague Thomas J. Dean in Science magazine, we believe that involving businesses in an international environmental agreement could help bridge a chronic funding gap.

A key part of this new deal for nature would be making the corporations that depend on the health of natural ecosystems and species help foot the bill to preserve biodiversity.

Benefiting the bottom line

Why would corporations want to get involved?

First off, it may benefit their bottom line. Big companies depend on robust natural ecosystems and individual species.

We calculate that the increase in revenue and profits from biodiversity conservation could generate between $25 billion and $50 billion annually to fund global conservation efforts.

The seafood industry stands to gain $53 billion annually from an increase in marine stocks. This could generate $5 billion to $10 billion each year to spend on preserving biodiversity.

The insurance industry could see an additional $52 billion from increasing the area of protected coastal wetlands with a similar investment.

Agriculture also has an incentive to protect the habitats of wild pollinators, which, along with managed populations, enhance global crop production by an amount a global group of scientists estimates to be worth between $235 billion and $577 billion annually.

What’s more, there is growing evidence that when corporations engage in environmental stewardship, they become more attractive investments and their borrowing costs decline.

Corporate social responsibility

There is a second reason why big companies are sometimes willing to take action and pay to conserve biodiversity: corporate social responsibility, an ethos that builds into business models a commitment to protect the environment and benefit society.

Danone is a leader in this regard. It established the first partnership agreement between a global environmental convention and a private company over 20 years ago.

Since then, the multinational corporation best known for its yogurt and bottled water has promoted and supported the sustainable use and management of wetlands.

Danone, for example, worked with local partners to replant mangroves in approximately 500 Senegalese villages. We believe this reforestation project shows that investments in nature can be sustainable and scalable business models.

Danone, which earned $3 billion in profits in 2017, has its own $80 million “Ecosystem Fund.” It’s just one of an increasing number of companies taking concrete steps toward biodiversity protection, even though they are not required by any law or national policy.

More than 21 national and regional initiatives have been established to encourage partnerships between business and biodiversity conservation. For example, 10 of the 13 biggest seafood companies that control up to 16 percent of global marine catch and 40 percent of the largest and most valuable fisheries have come together to support an ocean stewardship initiative.

Similarly, the International Council of Forest and Paper Associations, which represents the global forest products industry, now engages in sustainable forest management certification.

The total area of forest supplying the industry that is deemed subject to sustainable practices increased from 62 million hectares in 2000 to 310 million hectares in 2015, according to the industry group – from 12 percent to more than half of the industry’s total supply area. The annual revenue of the world’s 100 largest global forest, paper and packaging companies is over $300 billion.

A new deal for nature

In addition to creating marine reserves, protecting forests, preserving the habitats of wild pollinators and conserving coastal wetlands, the private sector could also help finance conservation efforts in developing countries.

Based on our calculations, if the seafood sector were to set aside up to 20 percent of the increase in profits it gets from sustainably managing marine biomass stocks, it could conceivably spend up to $10 billion annually for marine biodiversity conservation.

And we estimate that by channeling up to 10 percent of the gains from sustainable forest management, the forest products industry could raise as much as $30 billion each year for investment in increasing protected forest area.

An agricultural sector contribution of around 10 percent of the benefits it derives from wild pollination services would amount to about $20 billion to $60 billion per year in additional financing for the conservation, creation and restoration of wild pollinator habitats.

All told, this business-world support could help close the $100 billion gap in global biodiversity conservation funding. This would go a long way toward slowing, and potentially reversing, biodiversity loss.
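As a rough cross-check, the sector contributions estimated above can be tallied directly; the figures below simply restate the article’s own estimates, in US$ billions per year.

```python
# Tallying the sector contributions estimated above (US$ billions/year).
seafood = 0.20 * 53          # ~20% of a $53bn gain in marine stocks -> ~$10bn
forestry = 30                # ~10% of sustainable-forestry gains  -> ~$30bn
pollination = (0.10 * 235, 0.10 * 577)  # 10% of $235-577bn -> $23.5-57.7bn

low = seafood + forestry + pollination[0]
high = seafood + forestry + pollination[1]
print(f"Total: ${low:.0f}bn-${high:.0f}bn per year")  # roughly $64bn-$98bn
```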

There are, of course, barriers to corporate conservation. The costs may be high. It may be hard for businesses to assess the long-term value of biodiversity conservation benefits and integrate them into investment decisions. And it is possible that some of the corporations that take this step could be at a competitive disadvantage, especially in the short term.

But a number of companies are already showing that they believe investing in ecosystem preservation is worth it. In our view, corporate support for international biodiversity conservation is essential to prevent “biological annihilation.”

This article was originally published on The Conversation. Read the original article.

Four steps to cut transport emissions that can help fight climate change

The UN Intergovernmental Panel on Climate Change recently warned that global warming could reach 1.5℃ as early as 2030. The landmark report by leading scientists urged nations to do more to avert an impending crisis.

We have 12 years, the report said, to contain greenhouse gas emissions. This includes serious efforts to reduce transport emissions.

In Australia, transport is the third-largest source of greenhouse gases, accounting for around 17% of emissions. Passenger cars account for around half of our transport emissions.

The transport sector is also one of the strongest factors in emissions growth in Australia. Emissions from transport have increased nearly 60% since 1990 – more than any other sector. Australia is ranked 20th out of 25 of the largest energy-using countries for transport energy efficiency.

Cities around the world have many opportunities to reduce emissions. But this requires renewed thinking and real commitment to change.

Our planet can’t survive our old transport habits

Past (and still current) practices in urban and transport planning are fundamental causes of the transport problems we face today.

Over the past half-century, cities worldwide have grown rapidly, leading to urban sprawl. The result was high demand for motorised transport and, in turn, increased emissions.

The traffic gridlock on roads and motorways was the catalyst for most transport policy responses during that period. The solution prescribed for most cities was to build out of congestion by providing more infrastructure for private vehicles. Limited attention was given to managing travel demand or improving other modes of transport.

Equating mobility with building more roads nurtured a tendency towards increased motorisation, reinforcing an ever-increasing inclination to expand the road network. The result was a range of unintended adverse environmental, social and economic consequences. Most of these are rooted in the high priority given to private vehicles.

What are the opportunities to change?

The various strategies to move our cities in the right direction can be grouped into four broad categories: avoid, shift, share, and improve. Major policy, behaviour and technology changes are required to make these strategies work.

Avoid strategies aim to slow the growth of travel. They include initiatives to reduce trip lengths, such as high-density and mixed land use developments. Other options decrease private vehicle travel – for example, through car/ride sharing and congestion pricing. And teleworking and e-commerce help people avoid private car trips altogether.

Shanghai’s Hongqiao transport hub is a unique example of an integrated air, rail and mixed land use development. It combines Hongqiao’s airport, metro subway lines, and regional high-speed rail. A low-carbon residential and commercial precinct surrounds the hub.

Shift strategies encourage travellers to switch from private vehicles to public transport, walking and cycling. This includes improving bus routes and service frequency.

Pricing strategies that discourage private vehicles and encourage other modes of transport can also be effective. Policies that include incentives that make electric vehicles more affordable have been shown to encourage the shift.

Norway is an undisputed world leader in electric vehicle uptake. Nearly a third of all new cars sold in 2017 were a plug-in model. The electric vehicle market share was expected to be as much as 40% within a year.

Share strategies affect car ownership. New sharing economy businesses are already moving people, goods and services. Shared mobility, rather than car ownership, is providing city dwellers with a real alternative.

This trend is likely to continue and will pose significant challenges to car ownership models.

Uber claims that its carpooling service in Mumbai saved 936,000 litres of fuel and reduced greenhouse gas emissions by 2,662 metric tonnes within one year. It also reports that UberPool in London achieved a reduction of more than 1.1 million driving kilometres in just six months.

Improve strategies promote the use of technologies to optimise performance of transport modes and intelligent infrastructure. These include intelligent transport systems, urban information technologies and emerging solutions such as autonomous mobility.

Our research shows that sharing 80% of autonomous vehicles will reduce net emissions by up to 20%. The benefits increase with wider adoption of autonomous shared electric vehicles.

The urgency and benefits of steering our cities towards a path of low-carbon mobility are unmistakable. This was recognised in the past but progress has been slow. Today, the changing context for how we build future cities – smart, healthy and low-carbon – presents new opportunities.

If well planned and implemented, these four interventions will collectively achieve transport emission reduction targets. They will also improve access to the jobs and opportunities that are preconditions for sound economic development in cities around the world.

This article was originally published on The Conversation. Read the original article.


Five ways to ensure man-machine collaboration

For most people today, robots and smart systems are servants that work in the background, vacuuming carpets or turning lights on and off. Or they’re machines that have taken over repetitive human jobs from assembly-line workers and bank tellers. But the technologies are getting good enough that machines will be able to work alongside people as teammates – much as human-dog teams handle tasks like hunting and bomb detection.

There are already some early examples of robots and people teaming up. For example, soldiers use drones for surveillance and ground robots for bomb disposal as they carry out military missions. But the U.S. Army envisions increased teaming of soldiers, robots and autonomous systems in the next decade. Beyond the military, these human-robot teams will soon start working in fields as diverse as health care, agriculture, transportation, manufacturing and space exploration.

Researchers and companies are exploring lots of avenues for improving how robots and artificial intelligence systems work – and technical advances are important. As an applied cognitive scientist who has conducted research on human teaming in highly technical settings, I can say human-robot systems won’t be as good as they could be if the designers don’t understand how to engineer technologies that work most effectively with real people. A few basic concepts from the deep body of scholarly research into human teamwork can help develop and manage these new relationships.

1. Different jobs

Teams are necessarily groups of people with separate, though interdependent, roles and responsibilities. A surgical team, for instance, might include a nurse, a surgeon and an anesthesiologist. Similarly, members of a human-robot team should be assembled to take on different elements of a complex task.

Robots should do things they are best at, or that people don’t want to do – like lifting heavy items, testing chemicals and crunching data. That frees up people to do what they’re best at – like adapting to changing situations and coming up with creative solutions to problems.

A human-robot surgical team might have a human surgeon conducting laparoscopic or minimally invasive surgery with assistance from a robot manipulator with cameras, inserted into the patient and operated externally by the surgeon. The camera view can be augmented by overlaying medical imaging data of the patient’s internal anatomy.

Planning for this sort of division of labor suggests people shouldn’t replicate themselves in machines. In fact, humanoid-shaped robots or robots and AI that mimic human social behavior may mislead their human teammates into having unrealistic expectations of what they can do.

2. Mutual backup

Effective teams’ members know that everyone has a different role – but are available to support each other when necessary. The disastrously fatal response to Hurricane Katrina in 2005 was partly the result of confusion and lack of coordination among government agencies and other groups like the Red Cross.

Teammates need to understand their own roles and those of the rest of the team, and how they fit together. They also need to be able to use this knowledge to avoid stepping on teammates’ toes, while anticipating others’ potential needs. Robots and artificial intelligence need to understand how their parts of the task relate to the parts their teammates are doing, and how they might be able to help as needed.

3. Common understanding

Effective teams share knowledge about team goals and the current situation, and this facilitates their interactions – even when direct communication is not possible.

Shared knowledge allows all sorts of collaboration and coordination. For instance, when inflating a hot air balloon, the pilot is at one end, in the basket monitoring the burner. A crew member must be at the far end of the balloon, steadying it by holding a rope attached to its top. They can’t see or hear each other because the balloon blocks the view and the propane burner drowns out any other sound. But if they’re trained well, neither needs to communicate to know what the other is doing, and what needs to happen next.

The connection team members have comes not only from information they all know, but shared knowledge developed through experience working together. Some scholars have suggested that robots can’t build experience and shared knowledge with humans, while other researchers are working on finding ways to actually do that. Machine learning will likely be a key factor in helping robots develop expectations of their coworkers’ behavior. Coupled with human intelligence, each side will learn about the other’s capabilities, limitations and idiosyncrasies.

4. Effective interaction and communication

Team members need to interact; effective teaming depends greatly on the quality of those interactions. In hospital teams for emergency resuscitation of patients, team interaction and communication are crucial. Those teams are often made up of whatever medical personnel are nearest to the patient, and members need to know right away what happened before the patient’s heart stopped – a life is at stake.

Yet even between people, communication isn’t always seamless. Between people and robots there are even more challenges – like making sure they share understandings of how words are used or what appropriate responses are to questions. Artificial intelligence researchers are making great strides in advancing computers’ ability to understand, and even produce, natural language – as many people experience with their smart assistant devices like Amazon’s Alexa and Google Home, and mobile and car-based GPS directions systems.

It’s not even clear that typical human communication is the best model for human-robot teams. Human-dog teams do fine without natural language. Navy SEALs can work together at highly effective levels without uttering a word. Bees communicate the location of resources with a dance. Communication does not have to involve words; it could include sound signals and visual cues. If a robot was tending a patient when their heart stopped, it could display what happened on a monitor that all resuscitation team members could see.

5. Mutual trust

Interpersonal trust is important in human teams. If trust breaks down among a team of firefighters, they’ll be less effective and may cost lives – each other’s or members of the public they’re trying to help. The best robot teammates will be trustworthy and reliable – and any breaches in reliability need to be explained.

But even with an explanation, technology that is chronically unreliable is likely to be rejected by human teammates. That’s even more vital in safety-critical technology, like autonomous vehicles.

Robots are not automatically capable of teaming with humans. They need to be assigned effective roles on the team, understand other team roles, train with human team members to develop common understanding, develop an effective way to communicate with humans, and be reliable and trustworthy. Most importantly, humans should not be asked to adapt to their nonhuman teammates. Rather, developers should design and create technology to serve as a good team player alongside people.

This article was originally published on The Conversation. Read the original article.

Have we really reached ‘peak car’?

General Motors has announced it’s shuttering five production facilities and killing six vehicle platforms by the end of 2019 as it reallocates resources towards self-driving technologies and electric vehicles.

The announcements should come as a surprise to no one; they echo a similar announcement from Ford earlier this year that it will exit all car production other than the Mustang within two years.

Why the sudden attitude adjustment toward cars? Well, both firms cite a focus on trucks, SUVs and crossovers. OK, sure — that’s what more people are buying when they buy a vehicle today. But there is a broader and more long-term element to this discussion.

Have we reached Peak Car?

Many may remember the dialogue associated with Peak Oil, or the idea that we had reached or would soon reach the peak production levels of oil around the globe.

Such forecasts and predictions were likely related to price run-ups on commodity and investment strategies in the oil industry. However, new exploration discoveries and extraction technologies ultimately mean we are a long way from running out of oil. While we may still hit peak production in the near future, it is more likely due to a decreasing need as society moves to alternative energy sources.

But what about cars? North American car production hit 17.5 million vehicles in 2016, and dropped marginally to 17.2 million in 2017. Interesting, but perhaps not significant.

More telling are changes in driver behaviour. In North America, for example, fewer teens are getting driver’s licences. In 1983, 92 per cent of teens were licensed, while by 2014, that number had dropped to 77 per cent. In Germany, the number of new licences issued to drivers aged 17 to 25 has dropped by 300,000 over the last 10 years.

The future is driverless

Factor in ride-sharing services like Uber and Lyft, the comprehensive cost of vehicle ownership and more effective public transportation (everywhere but Canada) and we get a sense of some of the reasons for these evolving automotive strategies.

Most significant, however, is the evolution of self-driving technology. Picture this scenario:

Julie is an ER doctor at the local hospital, on the 7 a.m. to 3 p.m. shift. She jumps in the family car at 6:30 a.m. and is at the hospital by 6:50 a.m.

After dropping Julie off, the car then heads home, arriving in time to take Julie’s two children to their high school; one of them tosses their hockey equipment in the back of the car. The car then returns home to take Julie’s husband to the law office where he starts work at 9 a.m.

The car then swings by the school to take Julie’s daughter to hockey practice at 2:30 p.m., and then returns to the hospital to pick Julie up. And so on.

The technology to support the scenario above exists now, and will result in reduced car ownership through a more economical and efficient approach to managing cars, whether accessed through independent household ownership or fleet membership.

As it is today, a family like Julie’s would need two or possibly three vehicles, and those vehicles would largely sit still most of the day. Tomorrow, the family could be down to one vehicle, possibly an SUV for the hockey gear. What happens when families or groups of people further pool their assets for more ride-sharing or increased capacity?

Fewer cars on the road within a decade

We are moving from a do-it-yourself (DIY) transportation economy to a sharing or do-it-for-me (DIFM) economy. Many of us won’t like it — I honestly like to drive — but the numbers and the technology are there.

As safety technologies improve and societal paradigms shift, this evolution will gather momentum. Based on the young driver statistics above, it seems reasonable to anticipate a reduction in cars per capita of 20 to 30 per cent in the next decade.

Unions at GM and Ford are justifiably unhappy, but they shouldn’t be surprised. It is quite possible that we have reached Peak Car in North America and Europe.

Companies that want to succeed in this new environment will need to be different, and especially better in some way. If car volumes drop by 30 per cent over the next 10 years, there better be something special about the car company that hopes to survive, let alone prosper — like better technology, better comfort or better service.

If current trends continue, we can anticipate more shutdown announcements — like GM’s — from car companies and parts suppliers, as there won’t be room for all of them.

This article was originally published on The Conversation. Read the original article.

Birth of first gene-edited babies broadens the ethical debate about human gene-editing

The media is buzzing with the surprise news that a Chinese researcher, Jiankui He, has created the world’s first genome-edited twins. He did this, ostensibly, to provide resistance to HIV, the virus that causes AIDS.

Prof. He, reportedly working with his former Rice University supervisor Michael Deem, capitalized on work in 2012 by Jennifer Doudna and Emmanuelle Charpentier, who introduced a new and easier way of altering the DNA of human and non-human organisms using CRISPR-Cas9 technology. He also built upon the work of molecular biologist Feng Zhang, who optimized this genome-editing system for use in human cells.

He’s claim moves human germline genome editing from the lab to the delivery room — something other scientists might have been thinking about despite ethical concerns.

The scientific community has expressed widespread condemnation of He’s decision to initiate a pregnancy using genetically modified embryos, calling it “dangerous,” “irresponsible” and “crazy.” What if mistakes are made? How can we be sure this powerful technology will benefit humankind? Are we ready for the consequences of genetically engineering our own evolution?

We argue that we cannot allow individual scientists to decide the fate of the human genome. Heritable human genome editing poses a significant existential threat because changes may persist throughout the human population for generations, with unknown risks.

We must commit to inclusive global dialogue — involving experts and the public — to develop broad societal consensus on what to do with genetic technologies.

Possible mutations or forced sterilization

He announced to the world that he edited the genome of human embryos for seven couples using CRISPR-Cas9 technology. According to He, two of these embryos resulted in a pregnancy, and twin girls (Lulu and Nana, which are pseudonyms) were born.

The goal of the editing was to confer resistance to HIV by modifying the CCR5 gene (the protein doorway by which HIV enters human cells). He claims that these edits have been verified in both twins and this data has been looked over and called “probably accurate” by George Church, a world-renowned Harvard geneticist.

Evidence suggests, however, the procedure was unnecessary, is unlikely to provide benefit and could even cause harm. Although the father of Lulu and Nana was HIV positive, it is unlikely that he would have passed this disease to his children using standard IVF procedures.

The children born of genome editing are genetic mosaics with uncertain resistance to HIV and perhaps decreased resistance to viral diseases like influenza and West Nile. This is because the CCR5 gene that He disabled plays an important role in resistance to these diseases.

As well, there is the possibility of unintended mutations caused by the CRISPR procedure. These health risks cannot be overstated, as the repercussions for these twin girls, in terms of their susceptibility to infectious diseases or cancer will likely be a cause for concern throughout their lives.

Another uncertain consequence for the twins concerns their reproductive health and freedom. As they approach reproductive age will they face the possibility of “forced” sterilization to prevent their edited genes being passed on to future generations?

Multiple investigations

The Southern University of Science and Technology in Shenzhen, China, where He is employed (currently on leave from February 2018 to January 2021), has distanced itself from the researcher and will form an independent international committee to investigate the widely publicized, controversial research.

Rice University, where Michael Deem is employed, has also said they will investigate.

The Shenzhen HarMoniCare Women’s and Children’s Hospital launched an inquiry into the validity of the ethics documents provided by He documenting research ethics approval.

Importantly, the ethics approval was only uploaded to the Chinese Clinical Trial Database on Nov. 8 as a retrospective registration — likely around the time that the twins were purportedly born.

Designer babies by powerful elites

With the genetic genie out of the bottle, we have to ask: do we need any more time to reflect on the ethics?

A just and fair society is one with less disparity and more justice. A predictable consequence of allowing (nay, encouraging) individuals to genetically modify their children will be greater disparity and greater injustice — and not only because of limited access to genome editing technology.

Of significant concern is the inevitable increase in discrimination, stigmatization and marginalization as powerful scientific and corporate elites decide which traits are desirable and which traits are not.

Although He disavows any interest in so-called “designer babies” whose parents have chosen their children’s eye-colour, hair-colour, IQ and so on, we are forced to contemplate such a “eugenic” dystopian future should we continue down this path.

The human genome belongs to all of us. As such, we need to commit to the hard work of making good on the 2015 admonition by the Organizing Committee for the International Summit on Human Gene Editing to work towards “broad societal consensus” on how we should proceed with, or not proceed with, editing it.

In this regard it is heartwarming to have Feng Zhang call for a moratorium on implantation of edited embryos and remind his scientific colleagues that “in 2015, the international research community said it would be irresponsible to proceed with any germline editing without ‘broad societal consensus about the appropriateness of the proposed application.’”

This article was originally published on The Conversation. Read the original article.

Mobility as a Service (MaaS) can make getting around easier if a few things change

Mobility as a Service (MaaS) represents a new way of thinking about transport. It has the potential to be the most significant innovation in transport since the advent of the automobile.

In a move away from dependence on privately owned cars or multiple transport apps, MaaS combines mobility services from public transport, taxis, car rental and car/bicycle sharing under a single platform that’s accessible from a smart phone. Not only will a MaaS platform plan your journey, it will also allow you to buy tickets from a range of service providers.

While autonomous vehicles have garnered much of the recent media attention on transport, MaaS is gaining ground. A Google search now returns more than 400,000 hits on “mobility as a service”. Many private and public transport providers, along with many state governments, are looking at the impacts of MaaS and how they can capitalise on the idea.

Why the growing interest in MaaS?

In part, the motivation is due to changing demographics. The world continues to urbanise with 55% of the global population living in urban areas today. By 2050, projections suggest that will increase to 68%. This increasing urbanisation will add to existing problems of traffic congestion.

A growing body of evidence suggests that providing more infrastructure won’t solve the problem. It’s too costly and this type of “solution” will provide only temporary relief. MaaS has been promoted as a better way to manage traffic congestion by making more efficient use of existing private and public transport infrastructure.

And MaaS has many other appealing aspects. It could shorten commuting times and make travelling more convenient. It could help shift commuter trips from peak times to low demand times (through demand-responsive pricing of the services).

Finally, MaaS could improve air quality by shifting travellers from cars to more sustainable modes, such as public and active transport, through reward systems. For example, in a trial in Gothenburg, customers were rewarded with points for every ton of CO2 emissions they avoided by using more sustainable travel modes. The points were redeemable for a range of goods and services.

The other motivating factor is the estimated value of the MaaS market. Projections suggest a market worth US$600 billion in the United States, European Union and China by 2025. Others have projected that the global market for MaaS will exceed US$1 trillion by 2030.

Lessons from early trials

UbiGo first trialled MaaS in Gothenburg, Sweden, for six months between November 2013 and April 2014. This involved 83 subscriptions by 195 people.

Most of the customers (80%) wanted to continue after the trial ended. Based on an evaluation of the Gothenburg trial, the following were important considerations for MaaS:

  • competitive cost relative to owning a car
  • flexibility and convenience
  • sufficient mobility infrastructure to reach most potential users
  • ease of use.

The first commercial application of the concept was by MaaS Global in Helsinki, Finland. The Whim app was launched in 2016. It covers public transport, taxis, car rentals, car-share and bike-share modes. Customers can use the service on a pay-as-you-go plan or by monthly subscription.

Governance the key to scaling up MaaS

MaaS Global is looking to expand elsewhere soon. The key question is whether it can work beyond Helsinki. The challenge is not about the technology — it is about governance.

It’s no coincidence that Whim was born in Finland, a small country with well-functioning institutions and well-designed cities. MaaS will continue to be successful here, in part due to support from the national government.

For example, in 2018 Finland became the first country in the world to create an open market for mobility services. As of January 2018, all mobility service providers must provide open data and application programming interfaces (APIs) to third parties.

By contrast, the Gothenburg trial, while successful, has not yet resulted in regular services. Based on lessons from the trial, UbiGo has refined its business model to better integrate public and commercial transport services and will relaunch in Stockholm with a trial followed by a full roll-out by the end of 2018.

MaaS requires a willingness by private and public transport providers to work with the creators of MaaS platforms. Transport providers must agree to allow the MaaS operator to sell their services and collect a “reasonable” and “fair” commission for each ticket sold.

Another challenge is getting private operators to participate despite losing customers in the short term.

Advocates suggest that, if the concept is successful, the pool of customers will grow as cars are abandoned in favour of MaaS. Hence most companies will want to participate in one or more MaaS platforms.

However, for its potential to be realised, MaaS needs governments to ensure a playing field that is fair for existing and new mobility service providers, and one that encourages cooperation rather than competition. It may be the case that the most efficient MaaS platforms will take the form of regulated monopolies, much like existing utility companies.

This article was originally published on The Conversation. Read the original article.

A/B testing: how offline businesses can boost profits

The market testing that helped give us the Google search we know today is being emulated by industries from hospitality to manufacturing to help better focus their products and services and meet customer needs. So what did Google do?

If you travel back through internet time via the Internet Archive, you can see what Google looked like soon after it first launched, more than 20 years ago.

While the logo is familiar, the look and feel of the website used to be quite different to what it is now. How did Google evolve into the faster-to-load, nicer-to-see, easier-to-read, pages and apps that we use today?

A senior Google employee told me that the search engine kept ahead of the competition via a process of rigorous prototype testing. At the time we spoke, prototypes were tested “offline” by measuring the reactions of hired test subjects to particular features and designs. But soon testing moved “online” and we all became the subjects of A/B tests.

What is A/B testing?

An A/B test is when a company gives a user access to one of two versions of a website or app:

A) the current version

B) the prototype.

The way users interact with the product is measured during testing. Subtle differences in these interactions can illustrate which version is more effective, according to particular criteria. If the prototype is proven superior, it replaces the existing version as the default product.
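The article doesn’t spell out how “proven superior” is decided, but a common approach is a two-proportion z-test on conversion rates. A minimal sketch, with invented traffic numbers:

```python
# Minimal sketch: deciding an A/B test with a two-proportion z-test.
# The traffic and conversion counts below are invented for illustration.
from math import sqrt

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Version A (current) vs version B (prototype)
z = z_score(conv_a=500, n_a=10_000, conv_b=580, n_b=10_000)
# |z| > 1.96 corresponds to p < 0.05 (two-sided): B's lift is unlikely
# to be chance, so the prototype would replace the current version.
print(f"z = {z:.2f}, launch B: {abs(z) > 1.96}")
```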

Google engineers ran their first A/B test in 2000 to determine the optimum number of search results that should be shown per page.

Statistics decide, not managers

Websites and apps have become a constellation of comparisons that collectively evolve systems to an improved state. Every change to an interface or alteration to an algorithm is A/B tested.

Web companies run an astonishing number of tests. In a talk, Microsoft stated that the Bing search engine runs more than 1,000 tests a month. So many, in fact, that every time we access an internet site or app, we are likely unwitting subjects of an A/B test. We are rarely aware of the tests because the variations are often subtle.

Companies are able to run so many tests that they have moved to a process known as hill climbing: taking small steps, getting gradually better. This approach has been so successful that it drives the way many companies innovate today.

Teams are charged with the goal of increasing the user measures. If a small tweak tanks, it’s dropped. If it triumphs, it’s launched. The decisions are made by statistics, not managers.
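As a toy illustration of that hill-climbing loop – with an invented stand-in for the measured user metric – each iteration proposes a small tweak and keeps it only if the measurement improves:

```python
# Toy sketch of product "hill climbing": propose a small tweak,
# measure it, and keep it only if the metric improves.
# measure() is an invented stand-in for a real A/B test readout.
import random

def measure(design: float) -> float:
    """Invented metric: peaks when design == 7.0."""
    return -(design - 7.0) ** 2

design = 0.0
best = measure(design)
for _ in range(1000):
    candidate = design + random.uniform(-0.5, 0.5)  # small step
    score = measure(candidate)
    if score > best:            # the tweak "triumphs": launch it
        design, best = candidate, score
print(f"converged near {design:.2f}")  # approaches 7.0
```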

Indeed, advocates of A/B testing stress the importance of ignoring the views of managers, which they call HiPPOs – the Highest Paid Person’s Opinions. This acronym was coined from tales such as that of Greg Linden, an early Amazon employee. Linden suggested that, just as supermarkets put magazines and snacks by the checkout queue, Amazon should adopt the same approach with its online shopping carts.

He recalls that a “senior vice president was dead set against” the idea, fearing it would discourage people from checking out.

Linden ignored the HiPPO and ran an A/B test. The results showed that Amazon would make more money and not lose customers, so Linden’s idea was launched. A/B tests have proved to be more accurate, faster and less biased than any HiPPO.

A/B testing can’t solve everything

The complicated part of A/B testing is figuring out how to measure users in a way that will yield the insights you need. Tests need to be carefully designed, and continually reviewed.

Do it wrong and you could end up with success in the short-term, but failure in the long run. A news site that promotes celebrity tidbits might get the immediate gratification of clicks, but lose loyal readers over time.

There are also limits to what A/B testing can observe. The testing relies on measuring user input: mouse clicks, typing, spoken commands, or taps on a mobile screen. Spotify recently asked: if someone has a playlist on in the background and isn’t interacting with their phone, how can Spotify measure whether the user is satisfied? No one currently has an answer.

Taking A/B testing offline

Despite these risks and limitations, the success of A/B testing pervades all companies with an internet presence. And now this testing is being trialled in the physical world.

A couple of years ago, I met with a company that prints and sends utility bills to customers. They A/B tested different formats of the bill, learning which formats improved the rates of customers paying on time.

Restaurants and bars are reportedly using data from sensors to learn which restaurant layout encourages the most sales. For example, if an intimate seating arrangement in the back of a bar attracts people to stay longer, customers in that space are likely to spend more on drinks.

A/B testing could even extend to manufacturing. Slightly different versions of a product could be made on flexible production lines. Production could then be altered if one version of the product was found to sell better than another.

It’s not always a smooth ride, but the power of A/B testing is here to stay.

This article was originally published on The Conversation. Read the original article.

Taking antibiotics for cold and flu is a bad idea

Winter is well and truly on its way. For many, this conjures up images of log fires, mistletoe and festive feasts. But it can also mean cold, damp mornings, short hours of daylight and the dreaded cold and flu season.

Tickly throats, headaches, fevers and generally feeling rotten are the warning signs that many of us fear. Pressures of work and personal commitments often lead people to seek a quick fix from their GP or other healthcare professional. This usually takes the form of antibiotics.

Evidence suggests the use of antibiotics is on the increase, which is a cause for concern as the overuse of antibiotics has been linked to antimicrobial resistance. This is the ability of microorganisms – such as bacteria and viruses – to evolve so that antimicrobials (antibiotics and antivirals) become less effective at killing or working against them.

Antibiotic resistance results in standard treatments – such as many of the commonly prescribed antibiotics – becoming ineffective. And this leaves people who need antibiotics for serious infections vulnerable.

This issue has been recognised as a problem on a global scale in a UK government-commissioned review. These findings led the National Institute for Health and Care Excellence (NICE) to publish quality standards to help clinicians prescribe antibiotics in ways that slow the rise in antimicrobial resistance.

Antibiotic expectations

The Cochrane review on which I worked found that many vulnerable patients have an increased risk of developing antimicrobial resistance. This includes people with chronic respiratory illness – many of whom keep “rescue packs” containing antibiotics at home. These repeat prescriptions are often issued without enough education to support their use or highlight their drawbacks – so unnecessary prescribing practices continue.

Beliefs and expectations by patients, healthcare professionals and society have been found to be the main drivers of the overuse of antibiotics. From a patient’s perspective, the desire to get better is often more important than any external considerations such as publicity campaigns. And for healthcare professionals, the greater good of society occurs outside the immediate consultation and is therefore often overlooked – along with existing evidence. This breeds a cycle of expectation and self-interest which serves both clinician and patient but neglects wider societal issues.

It is possible, then, that much antibiotic prescribing, particularly in the flu season, is driven by these expectations – from both patients and healthcare professionals. But this is not unique to antibiotic prescribing. Our previous research found similar behaviours with oxygen therapy. Despite emerging evidence and guidelines, poor prescribing and administration of oxygen therapy persists – and it is often given routinely for breathlessness to patients.

A medical priority

A UK parliamentary health and social care committee report on antimicrobial resistance has called for the issue to be regarded as “top five policy priority” for government – stressing the need to support the pharmaceutical industry to develop new antibiotics.

How Brexit will affect this investment and commitment is unclear. But there remains an urgent need to promote responsible and appropriate prescribing through education, research, guidelines and campaigns.

Current UK prescribing levels are reported to be double those of other countries such as Sweden, the Netherlands and the Baltic states. This presents a challenge for primary care and hospitals, which need to reduce both the number of antibiotics prescribed and the length of time they are administered.

Antibiotic efficacy

A recent government report has called for the use of rapid diagnostic testing to inform all antibiotic prescriptions. This approach should take the guesswork out of prescribing antibiotics by testing for blood markers that signify the presence of infection. Findings from a large trial based in the UK are expected soon.

Sometimes, though, prescribing general-use antibiotics is not only expected, but cheaper and easier. So it will require a concerted effort to promote responsible prescribing and to educate healthcare professionals, patients and the public to refrain from unnecessary antibiotic use.

So as winter approaches, rather than rushing to your doctor at the first sign of a sniffle, try to ride it out. Get lots of sleep, keep stress to a minimum and up your fluid intake – all of which have been shown to help treat and stave off colds and flu. It’s also worth being extra vigilant with hand washing to keep those germs at bay and stop them from developing into something nastier in the first place.

This article was originally published on The Conversation. Read the original article.
