Saturday, February 23, 2019

Disruption gets less likely with bigger teams, study finds

Conventional wisdom suggests that large teams are better at solving complex problems. And, in this era of specialization, many discoveries are the work of large groups of experts from a diversity of fields. But it’s not yet time to abandon support for small teams of scientists, according to a new study in Nature.

By analyzing data on the work of more than 50 million teams in science and technology, researchers discovered that larger teams developed recent, popular ideas, while small teams disrupted the system by drawing on older and less prevalent ideas.

Disruptive research tends to introduce new approaches and ask fundamental questions, while “developmental” research is more likely to adjust or test old theories and apply them in new contexts.

1,000 authors

Examples of very large projects include the Human Genome Project, and more recently, the projects to detect gravitational waves and the Higgs boson particle.

The detection of gravitational waves — a discovery that was published in a paper with more than 1,000 authors and which received the 2017 Nobel Prize in physics — “could possibly have been the most conservative experiment in history,” says James Evans, the senior author of the new Nature paper and a sociologist at the University of Chicago. “It tested a 100-year old hypothesis and that hypothesis was generated by one person, Albert Einstein.”

Solo scientists like Einstein, or small teams, appear to come up with novel ideas that change the course of a field. Those are becoming rarer, though: authorship lists on scientific papers have grown in the last century, from about one author per paper in 1913 to an average of 5.4 authors per paper in 2013.

The impact of this shift in team sizes isn’t completely known. This new study documents the different roles that small and large scientific teams play in the research landscape, but it raises more questions than it answers.

Different sizes, different approaches

Answers to these questions would help funding agencies make better decisions.

“There’s this long, long debate about this,” says John Walsh, who studies science, technology and innovation using a sociological perspective. “Is giving a lot of money to one (large) project a good way of moving the science forward, or is it better to give lots of people more modest funds and have them work on different things?”

In small teams, people are more apt to take chances because the cost of taking a chance is lower. There are fewer monetary resources invested and fewer careers at stake, Walsh explains.

What’s interesting, adds Walsh, is that the effects described by Evans and his co-authors Lingfei Wu and Dashun Wang show up even at modest team sizes — between one and 10 people.

One way of interpreting the finding is that small teams have a better chance of finding something unusual because they can be nimble and adapt to new findings by changing direction and pursuing new paths as they open up. It’s unclear, though, whether small teams propose more innovative and disruptive ideas to begin with, or whether they are more likely to change course midstream and “benefit from serendipity,” he says.

In contrast, large teams are more like huge shipping barges — impossible to turn on a dime. They are also faced with all sorts of conservative pressures.

“We realized just from our own experience that creating these big federations of people ends up really stifling certain kinds of ideas and certainly stifling the likelihood of following an interesting or unusual path,” says Evans. You have to get to the common denominator to build consensus, and “the common denominator, when you have a lot of people, is yesterday’s hits.”

Indeed, Evans’s study shows that large teams are more likely to cite the really famous older papers, whereas smaller groups are likely to cite a broader array of papers and to resurrect some more obscure findings from prior literature.

Because of the burden of co-ordination in large teams, “it’s much more likely that a small group of committed people can hammer on a problem and come up with a breakthrough or disruptive solution than a really large group, where they’re not going to be able to really coordinate,” says Steve Kozlowski, a professor of organizational psychology.

The co-ordination challenges increase when the large group is interdisciplinary because scientists have different sets of assumptions about the way the world works based on their disciplinary training.

Size matters?

Mirta Galesic, professor of Human Social Dynamics at the Santa Fe Institute, thinks that the small and large teams may represent different stages of the natural history of an idea.

Initially, a disruptive or unconventional idea is born small and only has a few people working on it. But if it stands up to initial investigation and scrutiny, it may attract more funding and more scientists to work on it. In other words, the disruptive work of small teams represents the seeds from which big projects grow.

“I think it’s possible that the small and large teams occur at different stages of the scientific process and that it could be a case that the size is correlated with the process, rather than the cause of a disruption,” says Galesic.

Implications for funding agencies

“What’s the secret sauce that the small teams seem to have?” Kozlowski asks. He’d like to see funding agencies invest more resources in studying team science: “If we’re going to be pushing for these large investments to tackle big problems, then we want to have research to help inform how these larger teams should be set up and managed.”

Some agencies are still supporting smaller teams. For example, studies by the National Institute of General Medical Sciences (NIGMS) a few years ago showed diminishing marginal returns on grant funding. The NIGMS researchers argued that this provided a rationale for spreading the research money across more labs.

Given the distinct roles small and large teams play in moving science forward, “our findings suggest the importance of supporting both small and large teams for the sustainable vitality of science and technology,” the authors write in the paper.

In particular, small teams are often neglected, and so the study is “an encouragement for funders to realize that if they want to fund disruptive innovations, they’re going to have to take more risks, and smaller teams are one important dimension of risk that we’re suggesting they should consider,” Evans says.

The current emphasis on funding large teams, he says, has a long term unintended consequence: “science as a whole ends up looking more conservative today, and I think it’s starving future innovations.”

This article was originally published on The Conversation. Read the original article.

Consumers don’t realize how their personal data is misused

Sixty-seven percent of smartphone users rely on Google Maps to help them get to where they are going quickly and efficiently.

A major feature of Google Maps is its ability to predict how long different navigation routes will take. That’s possible because the mobile phone of each person using Google Maps sends data about its location and speed back to Google’s servers, where it is analyzed to generate new data about traffic conditions.

Information like this is useful for navigation. But the exact same data that is used to predict traffic patterns can also be used to predict other kinds of information – information people might not be comfortable with revealing.

For example, data about a mobile phone’s past location and movement patterns can be used to predict where a person lives, who their employer is, where they attend religious services and the age range of their children based on where they drop them off for school.

These predictions label who you are as a person and guess what you’re likely to do in the future. Research shows that people are largely unaware that these predictions are possible, and, if they do become aware of it, don’t like it. In my view, as someone who studies how predictive algorithms affect people’s privacy, that is a major problem for digital privacy in the U.S.

How is this all possible?

Every device that you use, every company you do business with, every online account you create or loyalty program you join, and even the government itself collects data about you.

The kinds of data they collect include things like your name, address, age, Social Security or driver’s license number, purchase transaction history, web browsing activity, voter registration information, whether you have children living with you or speak a foreign language, the photos you have posted to social media, the listing price of your home, whether you’ve recently had a life event like getting married, your credit score, what kind of car you drive, how much you spend on groceries, how much credit card debt you have and the location history from your mobile phone.

It doesn’t matter if these datasets were collected separately by different sources and don’t contain your name. It’s still easy to match them up according to other information about you that they contain.

For example, there are identifiers in public records databases, like your name and home address, that can be matched up with GPS location data from an app on your mobile phone. This allows a third party to link your home address with the location where you spend most of your evening and nighttime hours – presumably where you live. This means the app developer and its partners have access to your name, even if you didn’t directly give it to them.
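
To make the mechanics concrete, here is a minimal sketch of the kind of inference described above. The sample coordinates, the grid size and the night-time window are illustrative assumptions, not details from any real product.

```python
from collections import Counter
from datetime import datetime

def guess_home(points):
    """Guess a 'home' location from timestamped GPS fixes.

    points: iterable of (iso_timestamp, latitude, longitude) tuples.
    Returns the grid cell where the device most often sits at night.
    """
    cells = Counter()
    for ts, lat, lon in points:
        hour = datetime.fromisoformat(ts).hour
        if hour >= 22 or hour < 6:  # night-time fixes only
            # Round to a coarse grid so nearby fixes group together.
            cells[(round(lat, 3), round(lon, 3))] += 1
    return cells.most_common(1)[0][0] if cells else None

sample = [
    ("2019-02-20T23:15:00", 40.7411, -73.9897),
    ("2019-02-21T01:40:00", 40.7412, -73.9899),
    ("2019-02-21T13:05:00", 40.7527, -73.9772),  # daytime fix, e.g. a workplace
]
print(guess_home(sample))  # -> (40.741, -73.99)
```

Run over months of location history, the same few lines would also surface a workplace and a weekly routine.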

In the U.S., the companies and platforms you interact with own the data they collect about you. This means they can legally sell this information to data brokers.

Data brokers are companies that are in the business of buying and selling datasets from a wide range of sources, including location data from many mobile phone carriers. Data brokers combine data to create detailed profiles of individual people, which they sell to other companies.

Combined datasets like this can be used to predict what you’ll want to buy in order to target ads. For example, a company that has purchased data about you can do things like connect your social media accounts and web browsing history with the route you take when you’re running errands and your purchase history at your local grocery store.

Employers use large datasets and predictive algorithms to make decisions about who to interview for jobs and predict who might quit. Police departments make lists of people who may be more likely to commit violent crimes. FICO, the same company that calculates credit scores, also calculates a “medication adherence score” that predicts who will stop taking their prescription medications.

How aware are people about this?

Even though people may be aware that their mobile phones have GPS and that their name and address are in a public records database somewhere, it’s far less likely that they realize how their data can be combined to make new predictions. That’s because privacy policies typically only include vague language about how data that’s collected will be used.

In a January survey, the Pew Internet and American Life project asked adult Facebook users in the U.S. about the predictions that Facebook makes about their personal traits, based on data collected by the platform and its partners. For example, Facebook assigns a “multicultural affinity” category to some users, guessing how similar they are to people from different race or ethnic backgrounds. This information is used to target ads.

The survey found that 74 percent of people did not know about these predictions. About half said they are not comfortable with Facebook predicting information like this.

In my research, I’ve found that people are only aware of predictions that are shown to them in an app’s user interface, and that makes sense given the reason they decided to use the app. For example, a 2017 study of fitness tracker users showed that people are aware that their tracker device collects their GPS location when they are exercising. But this doesn’t translate into awareness that the activity tracker company can predict where they live.

In another study, I found that Google Search users know that Google collects data about their search history, and Facebook users are aware that Facebook knows who their friends are. But people don’t know that their Facebook “likes” can be used to accurately predict their political party affiliation or sexual orientation.
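
As a rough illustration of how such a prediction can be built, a simple classifier trained on a binary matrix of “likes” can guess a trait a user never disclosed. The pages, users and labels below are entirely made up, and real studies use far larger samples.

```python
from sklearn.linear_model import LogisticRegression

# Rows: hypothetical users; columns: whether they "liked" pages A-D (1) or not (0).
likes = [
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
]
trait = [1, 1, 0, 0, 1, 0]  # e.g. a self-reported trait known for training users

model = LogisticRegression().fit(likes, trait)

new_user = [[1, 0, 1, 0]]       # someone who never disclosed the trait
print(model.predict(new_user))  # the model guesses it anyway
```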

What can be done about this?

Today’s internet largely relies on people managing their own digital privacy.

Companies ask people up front to consent to systems that collect data and make predictions about them. This approach would work well for managing privacy, if people refused to use services that have privacy policies they don’t like, and if companies wouldn’t violate their own privacy policies.

But research shows that nobody reads or understands those privacy policies. And, even when companies face consequences for breaking their privacy promises, it doesn’t stop them from doing it again.

Requiring users to consent without understanding how their data will be used also allows companies to shift the blame onto the user. If a user starts to feel like their data is being used in a way that they’re not actually comfortable with, they don’t have room to complain, because they consented, right?

In my view, there is no realistic way for users to be aware of the kinds of predictions that are possible. People naturally expect companies to use their data only in ways that are related to the reasons they had for interacting with the company or app in the first place. But companies usually aren’t legally required to restrict the ways they use people’s data to only things that users would expect.

One exception is Germany, where the Federal Cartel Office ruled on Feb. 7 that Facebook must specifically ask its users for permission to combine data collected about them on Facebook with data collected from third parties. The ruling also states that if people do not give their permission for this, they should still be able to use Facebook.

I believe that the U.S. needs stronger privacy-related regulation, so that companies will be more transparent and accountable to users about not just the data they collect, but also the kinds of predictions they’re generating by combining data from multiple sources.

This article was originally published on The Conversation. Read the original article.

 

Researchers develop a new DNA sequencing technique to analyse a cancer cell’s life history

A new DNA sequencing technique lets scientists track genetic errors in individual cancer cells. For the first time, they can reconstruct a tumour’s life history and understand how an error in a cell’s DNA led to the uncontrollable growth of a tumour. This new technology will help doctors understand how a particular cancer has evolved and personalise treatments for each patient, to make them more effective and successful.

We are made of billions of cells that work together to build every part of our body. Occasionally, one of these cells acquires an error in its genetic code and this error, or mutation, can sometimes make this abnormal single cell divide and grow faster than the healthy ones, causing a tumour to develop. During this process, the cells can continue to evolve and accumulate many more mutations that make them more dangerous than the original cell.

Previously, when researchers studied cancer, they used to take a piece of the tumour and analyse it as a whole. Without understanding the life history of each tumour, science could only give us an incomplete picture of the cancer, where the different cells are mixed and averaged, to get an idea of how dangerous the cancer was. But it didn’t tell us anything about how the tumour had evolved and what type of cells it was made of, making it hard for doctors to select the right treatment for each patient.

This is the reason many cancer treatments don’t work, and when they do, cancer sometimes regrows within a few months or years, coming back a lot more aggressive than the previous one and much more difficult to treat.

Seeing the whole picture

Since analysing the entire tumour as a whole wasn’t enough, five years ago researchers started using a different strategy: divide and conquer. They began dividing the tumours into single cells and analysing each cancer cell separately to try and understand which types of cells made up each tumour.

But even with this advance, they still only had two main tools to analyse single cancer cells. One tool allowed them to read the genetic code of a single cell at a time, identifying which cells have genetic mutations. The other tool helped them understand which genes were active in each cancerous cell, and what their role in the cells was. However, neither of these tools revealed the whole picture. Using them, you could either get the genetic errors from each cell, or the genes that are active and functional – but not both. This made it impossible to understand which genes are activated as a result of genetic errors in each cell.

A team of researchers at the MRC Weatherall Institute of Molecular Medicine, University of Oxford, developed a new single-cell sequencing technique that allowed them to see the whole picture. It lets scientists analyse the genetic errors that each cell in a tumour has accumulated while also understanding its gene activity and cell function. This will allow researchers to see in fine detail every aspect of the tumour.
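
As a toy illustration of why having both readouts from the same cells matters, combining a per-cell genotype table with a per-cell expression table lets you compare gene activity between mutant and non-mutant cells directly. This is not the TARGET-seq pipeline itself; the cells, mutation calls and activity values below are invented.

```python
import pandas as pd

# Per-cell mutation calls (genotype) and per-cell gene activity (expression),
# measured in the SAME cells. All values are invented for illustration.
genotype = pd.DataFrame({
    "cell_id": ["c1", "c2", "c3", "c4"],
    "has_mutation": [True, True, False, False],
})
expression = pd.DataFrame({
    "cell_id": ["c1", "c2", "c3", "c4"],
    "gene_X_activity": [8.1, 7.9, 2.3, 2.0],
})

cells = genotype.merge(expression, on="cell_id")
# With both layers joined, you can ask how gene activity differs between
# mutant and apparently healthy cells within the same tumour.
print(cells.groupby("has_mutation")["gene_X_activity"].mean())
```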

In their latest study, published in Molecular Cell, they used this new technique, called TARGET-seq, to analyse many thousands of cells from 11 patients whose blood-making cells had become cancerous. Their analysis provided a detailed picture of the cell types that made up blood cancers. Thanks to its high resolution, they could reconstruct the complete life history of each tumour and identify the molecules that were active during the first steps of tumour development. They also found that cells that appeared healthy, as they didn’t have cancerous mutations, were behaving like malignant cells and activating abnormal genes because they were in a tumour environment.

Scientists are now using TARGET-seq to analyse different types of aggressive leukaemias for which there are no effective treatments. They are hoping to understand how to eliminate the cells that started and sustained the tumour, to be able to completely eradicate them. In the future, we hope that this technique will be used by oncologists to determine the exact mixture of cancer cells that makes up each tumour and customise the right treatment for each patient.

This article was originally published on The Conversation. Read the original article.

How do hackers get into your digital devices and what can keep you safe

Every day, often multiple times a day, you are invited to click on links sent to you by brands, politicians, friends and strangers. You download apps on your devices. Maybe you use QR codes.

Most of these activities are secure because they come from sources that can be trusted. But sometimes criminals impersonate trustworthy sources to get you to click on a link (or download an app) that contains malware.

At its core, a link is just a mechanism for data to be delivered to your device. Code can be built into a website which redirects you to another site and downloads malware to your device en route to your actual destination.

When you click on unverified links or download suspicious apps you increase the risk of exposure to malware. Here’s what could happen if you do – and how you can minimise your risk.

What is malware?

Malware is defined as malicious code that will have an adverse impact on the confidentiality, integrity or availability of an information system.

In the past, malware described malicious code that took the form of viruses, worms or Trojan horses.

Viruses embedded themselves in genuine programs and relied on these programs to propagate. Worms were generally stand-alone programs that could install themselves using a network, USB or email program to infect other computers.

Trojan horses took their name from the wooden horse that the Greeks presented to the city of Troy during the Trojan war, as recounted in Homer’s Odyssey. Much like the wooden horse, a Trojan Horse looks like a normal file until some predetermined action causes the code to execute.

Today’s generation of attacker tools are far more sophisticated, and are often a blend of these techniques.

These so-called “blended attacks” rely heavily on social engineering – the ability to manipulate someone into doing something they wouldn’t normally do – and are often categorised by what they ultimately will do to your systems.

What does malware do?

Today’s malware comes in easy-to-use, customised toolkits distributed on the dark web or by well-meaning security researchers attempting to fix problems.

With a click of a button, attackers can use these toolkits to send phishing emails and spam SMS messages to deploy various types of malware. Here are some of them:

  • a remote administration tool (RAT) can be used to access a computer’s camera, microphone and install other types of malware
  • keyloggers can be used to monitor for passwords, credit card details and email addresses
  • ransomware is used to encrypt private files and then demand payment in return for the decryption key
  • botnets are used for distributed denial of service (DDoS) attacks and other illegal activities. DDoS attacks can flood a website with so much virtual traffic that it shuts down, much like a shop being filled with so many customers you are unable to move.
  • cryptominers will use your computer hardware to mine cryptocurrency, which will slow your computer down
  • hijacking or defacement attacks are used to deface a site or embarrass you by posting pornographic material to your social media.

How does malware end up on your device?

According to insurance claim data of businesses based in the UK, over 66% of cyber incidents are caused by employee error. Although the data attributes only 3% of these attacks to social engineering, our experience suggests the majority of these attacks would have started this way.

For example, by employees not following dedicated IT and information security policies, not being informed of how much of their digital footprint has been exposed online, or simply being taken advantage of. Merely posting what you are having for dinner on social media can open you up to attack from a well-trained social engineer.

QR codes are equally risky if users open the link a QR code points to without first checking where it leads, as indicated by this 2012 study.

Even opening an image in a web browser and running a mouse over it can lead to malware being installed. This is quite a useful delivery tool considering the advertising material you see on popular websites.

Fake apps have also been discovered on both Apple’s App Store and Google Play. Many of these attempt to steal login credentials by mimicking well-known banking applications.

Sometimes malware is placed on your device by someone who wants to track you. In 2010, the Lower Merion School District settled two lawsuits brought against it for violating students’ privacy and secretly recording using the web camera of loaned school laptops.

What can you do to avoid it?

In the case of the Lower Merion School District, students and teachers suspected they were being monitored because they “saw the green light next to the webcam on their laptops turn on momentarily.”

While this is a great indicator, many hacker tools will ensure webcam lights are turned off to avoid raising suspicion. On-screen cues can give you a false sense of security, especially if you don’t realise that the microphone is always being accessed for verbal cues or other forms of tracking.

Basic awareness of the risks in cyberspace will go a long way towards mitigating them. This is called cyber hygiene.

Using good, up-to-date virus and malware scanning software is crucial. However, the most important tip is to update your device to ensure it has the latest security updates.

Hover over links in an email to see where you are really going. Avoid shortened links, such as bit.ly and QR codes, unless you can check where the link is going by using a URL expander.
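
If you want to do that check yourself, the sketch below shows one way a simple URL expander can work: it asks each server where the link redirects without downloading or running the destination page. The example short link is a placeholder.

```python
import requests

def expand(short_url, max_hops=10):
    """Follow redirects manually and return the final destination URL."""
    url = short_url
    for _ in range(max_hops):
        # HEAD request with redirects disabled: we only read the Location header.
        resp = requests.head(url, allow_redirects=False, timeout=5)
        location = resp.headers.get("Location")
        if resp.status_code in (301, 302, 303, 307, 308) and location:
            url = location  # note: some servers return relative Location headers
        else:
            break
    return url

print(expand("https://bit.ly/example"))  # inspect the destination before you click
```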

What to do if you already clicked?

If you suspect you have malware on your system, there are simple steps you can take.

Open your webcam application. If you can’t access the device because it is already in use, this is a telltale sign that you might be infected. Higher than normal battery usage or a machine running hotter than usual are also good indicators that something isn’t quite right.

Make sure you have good anti-virus and anti-malware software installed. Tools such as Malwarebytes and Seguru can be installed on your phone as well as your desktop to provide real-time protection. If you are running a website, make sure you have good security installed. Wordfence works well for WordPress blogs.

More importantly though, make sure you know how much data about you has already been exposed. Google yourself – including a Google image search against your profile picture – to see what is online.

Check all your email addresses on the website haveibeenpwned.com to see whether your passwords have been exposed. Then make sure you never use those passwords again on other services. Basically, treat them as compromised.
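
The same site’s companion “Pwned Passwords” service also lets you check whether a password appears in known breaches without ever sending the password itself: only the first five characters of its SHA-1 hash leave your machine. A minimal sketch, with an obviously illustrative example password:

```python
import hashlib
import requests

def times_pwned(password):
    """Return how many known breaches contain this password (k-anonymity lookup)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, count = line.split(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # a very large number; never reuse a password like this
```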

Cyber security has technical aspects, but remember: any attack that doesn’t affect a person or an organisation is just a technical hitch. Cyber attacks are a human problem.

The more you know about your own digital presence, the better prepared you will be. All of our individual efforts better secure our organisations, our schools, and our family and friends.

This article was originally published on The Conversation. Read the original article.

 

How eating mindfully can transform your physical and mental health

In recent years, mindfulness – defined as “a mental state or attitude in which one focuses one’s awareness on the present moment” – has become embedded into our everyday language. Mindfulness has helped many people to develop the skills necessary to manage chronic pain, depression, anxiety, stress and sleeping disorders. It has also become a popular way to change eating behaviours under the term “mindful eating”.

Mindful eating encourages people to pay attention to food with all of their senses, noticing the physical and emotional responses that take place before, during and after an eating experience. Mindful eating teaches people to use wisdom to guide eating decisions, acknowledge food preferences non-judgementally and recognise physical hunger cues.

Although its purpose is not to lose weight, mindful eating can help those struggling to follow long-term diets by correcting their attitudes towards “good” and “bad” foods. Eating mindfully is also said to help reduce emotional eating and promote the consumption of smaller portions and fewer calories.

Despite its current popularity among psychologists, nutritionists and dietitians, mindful eating is nothing new. In fact, it can be traced back to the late Victorian era and the work of the US health food enthusiast Horace Fletcher.

Chewing for health

Dubbed the “great masticator”, Fletcher argued that “head digestion” (a person’s emotional state when eating) played a significant role in their food choices. Consequently, it was advisable to chew each mouthful of food 32 times (one for each tooth) in order to improve one’s physical and mental well-being.

In 1913, Fletcher published his first book on the topic: Fletcherism: What It Is or How I Became Young at Sixty. His advice bears a striking similarity to mindful eating guidelines today:

First: wait for a true, earned appetite.

Second: select from the food available that appeals most to appetite, and in the order called for by appetite.

Third: get all the good taste there is in food out of it in the mouth, and swallow only when it practically “swallows itself”.

Fourth: enjoy the good taste for all it is worth, and do not allow any depressing or diverting thought to intrude upon the ceremony.

Fifth: wait; take and enjoy as much as possible what appetite approves; nature will do the rest.

Fletcher claimed that comfort eating caused indigestion. As such, he advised readers to stop and take a moment to notice their feelings before reaching for food automatically. Likewise, Fletcher maintained that an awareness of the food in the mouth led to “wonders of new and pleasant sensations, new delights of taste and new leanings of appetite”. These recommendations to eat intentionally and savour every bite still form central components of contemporary mindful eating.

The art of eating

In line with some of the current claims of mindful eating, Fletcher claimed that regular practice of what became known as “fletcherising” would result in head clarity and increased body strength and stamina, and would fend off illness and tiredness. To demonstrate these assertions, he personally challenged Yale’s top athletes to a competition of strength and endurance, which, at 60 years of age, he is reputed to have won.

Fletcher’s book quickly became a bestseller and his methods were taken up by such eminent figures as Arthur Conan Doyle, Franz Kafka, Theodore Roosevelt and Mark Twain. The cereal producer John Harvey Kellogg also implemented Fletcherism in his Battle Creek Sanitarium in Michigan, US, and even hired a quartet to write “The Chewing Song” to promote its benefits – as featured in The Road to Wellville, a film about Kellogg.

Soon, Fletcherism was being advocated for children as a way to teach them to be aware of their bodies and minds. Thanks to avid campaigning from the health reformer Bernarr Macfadden, it was added to school hygiene textbooks by 1914. Fletcherism was also considered beneficial to prisoners and soldiers, with one criminal claiming that it had enabled him to break the bad habits of a lifetime, as he learnt that “dietary righteousness went hand-in-hand with spiritual well-being”.

Throughout the first half of the 20th century, “munching clubs” emerged across the US and Britain, with “Fletcherites” getting together to eat mindfully in what can be considered an early form of group mindfulness. However, after Fletcher’s death in 1919, the practice slowly lost momentum, and mindful eating was instead replaced with a more unhealthy approach to food – and so was born the calorie counting diet. This was based largely on the consumption of diet pills, chewing gum, laxatives and Lucky Strike cigarettes.

A mindful resurgence

The recent trend of mindful eating has once again thrust Fletcherism into the spotlight. And the similarities between mindful eating and Fletcherism have led researchers to test the effectiveness of 35 versus ten chews per mouthful of food.

They discovered that higher chewing counts reduce food intake, as they result in the production of lower levels of the hormone ghrelin, which stimulates appetite. This can make a person more mindful of their food choices and feel more in control of their eating.

And yet, nutrition today still remains too concerned with which foods to eat and which foods to limit. Whether you call it Fletcherism or mindful eating, this practice demonstrates that learning how to eat is just as important as learning what to eat.

This article was originally published on The Conversation. Read the original article.

The challenges of the gig life

In Europe and around the world, many people are delivering fast food on bicycles or acting as taxi drivers in their own cars, not quite employees and not quite self-employed. Following recent legal judgements in France, the UK and in other countries, the contractual status of these “gig workers” is again being questioned. We hear of the benefits of the flexible lifestyles afforded to workers in the “gig economy” but also complaints about precariousness and exploitation.

While gig economy platforms might be behaving in accordance with their legal contracts, many workers appear to feel that they are not being treated fairly. Another form of contract – known as the “psychological contract” – can perhaps help us understand.

Who is working in the gig economy?

All the signs suggest continued growth in the gig economy – paid work in short spells with no or limited commitment from either worker or company. This work frequently involves driving and delivering but can also cover other platform-mediated, short-term work, such as data coding. The work is frequently associated with the young – one survey in North America found that over 70% were aged under 45 and 90% were in higher education.

This youth profile allows proponents of gig work to justify the limited level of security, arguing that it provides access to work for people with limited professional experience. Furthermore, having another activity, such as being a student, makes the flexibility and autonomy attractive.

Yet such on-demand service work leads to a rise in precariousness more generally, and may increase difficulties for finding secure, full-time work. Recent EU research highlights how damaging this “bogus self-employment” can be for young people trying to start their careers. Here we can see parallels with other forms of precarious work, such as temporary and zero-hour contracts, that are similarly prevalent among young people.

Tensions in the gig economy

Across various platforms of the gig economy, workers are agitating for more. There is evidence of Uber drivers wanting the right to unionise and some Foodora riders want their platform to pay operational costs, such as bike maintenance and mobile data fees. Workers on Amazon Mechanical Turk have petitioned to be treated like humans, not algorithms.

Many focus their complaints around a desire to be treated as employees. They claim to have the obligations of employees but not the benefits. Yet gig platforms are clear that they do not recognise this employee status nor the obligations that would come with such a relationship.

Violating the “psychological contract”

One way of understanding these tensions is through the lens of “psychological contracts”. Human resource management theorists have developed this concept to describe the various unwritten agreements, both explicit and implicit, that workers and employers believe they have with one another. It captures the intangible and hard to quantify elements of the employer-employee relationship, beyond the legal contract. These elements can include considerations of flexibility in the scope of the work, use of initiative, evolution of roles and workers’ loyalty to the employer.

These psychological contracts function when they are more or less balanced. The employee and employer make commitments to one another that make the relationship attractive to both parties. The ongoing nature of the relationship allows trust and reciprocity to develop.

Gig platforms say they want a one-night stand, not a relationship

Of course, not all employment relationships are ongoing. Psychological contract theory tells us that gig work, like other types of short-term work relationships, will tend to be more “transactional” rather than “relational”, with more straightforward exchange and less focus on mutual trust and commitment.

This transactional model seems to capture the relationship the gig economy platforms wish to foster, particularly in light of claims by platform workers to be employees. The platforms have tailored the terms and language of their contracts in order to demand a minimum of their workers and justify minimal obligations in return. Indeed, for some workers, these transactional contracts are ideal.

A rough deal, or a broken one?

Yet protests and legal actions show that not all workers are happy. Psychological contract theory might offer two potential explanations for the tensions of the gig economy – workers are unhappy with the terms of the psychological contract, or they believe the platforms have violated the contract.

It is clear that some workers find the terms unfair. While it might be argued that workers have consented to these terms, consent becomes problematic when there are few better options. Young people in some countries face a lack of jobs, pushing them into low quality, poorly paid and precarious gig work.

It is also possible that gig workers initially consent to the psychological contract, but then believe the platform violates the terms. For instance, in 2016 Deliveroo encouraged workers to move to a new service contract, claiming that it would work out better for riders – when some found insufficient volume of work to maintain their earnings, the psychological contract was arguably violated.

A perception of reciprocity

Perception is an important feature of psychological contracts and the perceptions of the worker and firm are not necessarily always aligned. Since the terms of such psychological contracts are “in the eye of the beholder”, one party can believe that the other is failing to live up to obligations that the other does not recognise.

These perceptions can be informed by the legal and social context, and notions of what is fair. For example, ongoing research at Grenoble Ecole de Management has found that delivery riders perceive good customer service, such as rectifying errors, to be part of the job, yet they are not paid for such additional tasks. The platforms benefit from this commitment. On the other side, gig workers often expect, based on common-sense ideas of fairness, that platforms would not deactivate an account due to unreasonable passenger ratings. When this happens, the sense of betrayal can be intense. Reciprocity is what makes more conventional employer-employee relationships function and is perhaps part of what is missing for gig workers today.

This article was originally published on The Conversation. Read the original article.

Deepfakes could destroy social trust – here’s how to tackle it

It has the potential to ruin relationships, reputations and our online reality. “Deepfake” artificial intelligence technology promises to create doctored videos so realistic that they’re almost impossible to tell from the real thing. So far it has mostly been used to create altered pornographic clips featuring celebrity women’s faces but once the techniques are perfected, deepfake revenge porn purporting to show people cheating on their partners won’t be far behind.

But more than becoming a nasty tool for stalkers and harassers, deepfakes threaten to undermine trust in political institutions and society as a whole. The White House recently justified temporarily banning a reporter from its press conferences using reportedly sped up genuine footage of an incident involving the journalist. Imagine the implications of seeing ultra-realistic but artificial footage of government leaders planning assassinations, CEOs colluding with foreign agents or a renowned philanthropist abusing children.

So-called fake news has already increased many people’s scepticism towards politicians, journalists and other public figures. It is becoming so easy to create entirely fictional scenarios that we can no longer trust any video footage at face value. This threatens our political, legal and media systems, not to mention our personal relationships. We will need to create new forms of consensus on which to base our social reality. New ways of checking and distributing power – some political, some technological – could help us achieve this.

Fake scandals, fake politicians

Deepfakes are scary because they allow anyone’s image to be co-opted, and call into question our ability to trust what we see. One obvious use of deepfakes would be to falsely implicate people in scandals. Even if the incriminating footage is subsequently proven to be fake, the damage to the victim’s reputation may be impossible to repair. And politicians could tweak old footage of themselves to make it appear as if they had always supported something that had recently become popular, updating their positions in real time.

There could even be public figures who are entirely imaginary, original but not authentic. Meanwhile, video footage could become useless as evidence in court. Broadcast news could be reduced to people debating whether clips were authentic or not, using ever more complex AI to try to detect deepfakes.

But the arms race that already exists between fake content creators and those detecting or debunking disinformation (such as Facebook’s planned fake news “war room”) hides a deeper issue. The mere existence of deepfakes undermines confidence and trust, just as the possibility that an election was hacked brings the validity of the result into question.

While some people may be taken in by deepfakes, that is not the real problem. What is at stake is the underlying social structure in which we all agree that some form of truth exists, and the social realities that are based on this trust. It is not a matter of the end of truth, but the end of the belief in truth – a post-trust society. In the wake of massive disinformation, even honest public figures will be easily ignored or discredited. The traditional organisations that have supported and enabled consensus – government, the press – will no longer be fit for purpose.

Blockchain trust

New laws to regulate the use of deepfakes will be important for people who have damaging videos made of them. But policy and law alone will not save our systems of governance. We will need to develop new forms of consensus, new ways to agree on social situations based on alternative forms of trust.

One approach will be to decentralise trust, so that we no longer need a few institutions to guarantee whether information is genuine and can instead rely on multiple people or organisations with good reputations. One way to do this could be to use blockchain, the technology that powers Bitcoin and other cryptocurrencies.

Blockchain works by creating a public ledger stored on multiple computers around the world at once and made tamper-proof by cryptography. Its algorithms enable the computers to agree on the validity of any changes to the ledger, making it much harder to record false information. In this way, trust is distributed between all the computers, which can scrutinise each other, increasing accountability.
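
The tamper-evidence part can be illustrated with a few lines of Python. This is only the core hash-chain idea; real blockchains add distributed consensus, signatures and economic incentives on top of it, and the “records” below are made up.

```python
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain):
    # Every block commits to the hash of the block before it,
    # so changing any old record breaks every later link.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append(ledger, "clip A registered by newsroom X")
append(ledger, "clip B registered by newsroom Y")
print(verify(ledger))                 # True

ledger[0]["record"] = "forged clip"   # tamper with history
print(verify(ledger))                 # False: the chain no longer checks out
```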

More democratic society

We can also look to more democratic forms of government and journalism. For example, liquid democracy allows voters to vote directly on each issue or temporarily assign their votes to delegates in a more flexible and accountable way than handing over full control to one party for years. This would allow the public to look to experts to make decisions for them where necessary but swiftly vote out politicians who disregarded their views or acted dishonestly, increasing trust and legitimacy in the political system.

In the press, we could move towards more collaborative and democratised news reporting. Traditional journalists could use the positive aspects of social media to gather information from a more diverse range of sources. These contributors could then discuss and help scrutinise the story to build a consensus, improving the media’s reputation.

The problem with any system that relies on the reputation of key individuals to build trust is how to prevent that reputation from being misused or fraudulently damaged. Checks such as Twitter’s “blue tick” account verification for public figures can help, but better legal and technical protections are also needed: more protected rights to privacy, better responses to antisocial behaviour online, and better privacy-enhancing technologies built in by design.

The potential ramifications of deepfakes should act as a call to action in redesigning systems of trust to be more open, more decentralised and more collective. And now is the time to start thinking about a different future for society.

This article was originally published on The Conversation. Read the original article.

Mounting plastic in oceans is not the fault of the global south

Our oceans are littered with plastics. Indeed, we are regularly exposed to images and stories of whales and sea turtles choking to death on plastic trash. Ocean plastic is clearly a problem but what is the solution?

On the surface, it seems clear, plastic must be reduced or eliminated at its source. Here’s why: Ninety per cent of ocean plastics come from 10 rivers, eight of which are in Asia. And the five most plastic polluting countries are China, Indonesia, Philippines, Thailand and Vietnam.

This agrees with our experience along Vietnam’s coast, where there are piles of plastic on the beaches, and where we research the impact of marine plastic debris on coastal livelihoods.

However, when you look below the surface, you see that these arguments blame the plastic tide on consumers in the global south — without mention of those living in the global north. It is as if they have no responsibility for the crisis.

If we understand waste not as something produced by the actions of a group of individuals, but rather as a product of socioeconomic systems that contribute to making waste and encourage wasting, problems with these dominant explanations arise. We start to see that Western consumers are part of the problem and cannot be absolved of their responsibility.

Unequal waste flows

Asian countries have long been in the business of processing the plastic waste that comes from the global north. But China’s January 2018 ban on imported waste (much of which arrived from the global north) completely disrupted the plastic waste trade.

News reports show that Canada, the United States, the United Kingdom and Australia scrambled through much of 2018 to find a solution to this problem. Much of the waste was diverted to neighbouring countries, including Indonesia, the Philippines, Thailand, Malaysia and Vietnam — four of which are among the so-called most polluting countries.

These countries are now overwhelmed by the sheer volume of plastics. Vietnam, for instance, announced it would ban the import of scrap material in early 2019, in response to concerns by residents about worsening environmental conditions and the health of locals.

Exporting problems and inequality

Some individuals, mostly in the global north, are trying to reduce their plastic consumption by avoiding cheap plastic straws and single-use bags or using only durable and sustainably produced items.

Unfortunately, these “solutions” perpetuate inequality, nationally and internationally. Not everyone can afford a bamboo toothbrush. In addition, durable options are often made of multiple components that are harder to separate for recycling once they enter the waste stream — and once they do, are slower to break down.

This focus on individual action also overlooks the fact that corporations using plastic packaging are subsidised through publicly funded municipal waste programs. And lighter plastic packaging equals cheaper global shipping — further encouraging production and consumption of more cheap plastic.

But by far the biggest consequence of our consumer lifestyle is the creation of wasteful spaces. As contaminated oceans and filthy landscapes become more and more common, the increased attention to improper waste-management practices in “polluting countries” has created the perception that they are mismanaging and misusing plastics. Those on the receiving end of the global north’s waste pay the ultimate price.

Tidying things up

The export of waste from the global north to the global south has been controversial for more than 30 years. The United Nations Development Programme (UNDP) argued in 1989 that this perpetuates inequality and supports the movement of waste across borders. Recently the UNDP proposed revising the wording of the Basel Convention, so that imported plastic waste would no longer be called “green waste,” giving the receiving country the right to refuse polluted or mixed plastic waste that it could not manage safely.

Although this amendment has not been approved, doing so would encourage a better understanding of the source of plastics in our oceans instead of blaming the developing world for their improper management.

Make no mistake, when we throw out that single-use cup, recycle plastic cauliflower wrappers or buy into the current Marie Kondo obsession of keeping only “joyful things,” we are supported by structures of global inequality. Ethical consumption is still consumption, and there may not always be another country or landfill available for our discarded stuff.

It may seem right to encourage recycling, but there are larger implications. Recycling will not fix the problem of ocean plastics, and pointing the finger at the global south for poor waste management practices simply reproduces colonial habits of exporting problems and victim-blaming. True solutions rest in reduced consumption and more equitable waste-management practices including rewarding sustainable ideas and forcing corporations to pay to clean up their mess.

This article was originally published on The Conversation. Read the original article.

 

Personal DNA test for research, but what about privacy risks?

Your DNA has become a valuable commodity. Companies such as 23andMe may charge you for an analysis of your genetic profile, but they make their real money from selling that data on to other companies.

Now healthcare providers are following suit by encouraging patients to take genetic tests that will create databases ostensibly for medical research. Britain’s National Health Service (NHS) recently announced that it was launching such a scheme in an attempt to build a database of anonymised genetic data for researchers.

But recent reports that Our Lady’s Children’s Hospital, Crumlin in Dublin – Ireland’s largest children’s hospital – allegedly shared patient DNA data with a private firm without appropriate consent highlights the potential risk that comes with giving up your genetic records. Your DNA contains sensitive information that can be used to make important personal decisions about you and your family members. When you hand over these details to a large database – whoever is building it – you are ultimately risking it being used in ways you can’t foresee and which aren’t always to your benefit.

The first questions are where your data will end up and who will have access to it. The NHS is attempting to keep control of the genetic data it gathers by sharing it with researchers at its own company, Genomics England. But there has been no indication of what purposes the data can be used for, or what limits will be placed on its use or transfer to other research centres or companies. In the past, Genomics England met with Google to discuss how the tech firm might help analyse genetic data gathered under a previous scheme, the 100,000 Genomes Project.

A spokesperson for Genomics England told The Conversation that it had “no formal contractual relationship between Genomics England and Google”. However, it said: “We have a mutual interest in secure data storage and we have meetings from time to time. As part of our mandate to stimulate the UK genomics industry, we are in touch with Google Ventures. They invest in life sciences companies which may be interested in working with us.”

The recent Irish example of data transfer apparently without appropriate consent also reminds us that agreements and rules over who can access data can be broken. In January 2019, an investigation was launched into the alleged supply of 1,500 DNA samples from the Crumlin children’s hospital to Genomics Medicine Ireland (GMI) without proper authorisation from patients.

If these allegations are true, it would represent a breach of European data protection law, which requires explicit consent for the processing of DNA data. What is perhaps more of a problem is that even when people are told what will happen with their data, they may not understand those uses or its potential consequences.

Initiatives such as the NHS project are justified by claims that they offer an efficient way to diagnose rare or undiscovered illnesses, speeding up treatment and improving patient outcomes. More broadly, proponents argue, sharing DNA data can allow researchers to spot patterns that would otherwise go unidentified, increasing scientific understanding and aiding in the development of treatments.

But having your DNA sequenced isn’t just a way of finding out if you are at risk of a disease or making an altruistic contribution to an abstract research project. DNA data exposes our most inherent characteristics, revealing ethnic or racial groupings, as well as outlining current and future health issues. Some people have even tried to link DNA tests to intelligence.

Concerns about linking individuals to the characteristics revealed by their DNA are usually countered by claims that the data is anonymised. But both practical experience and academic work have shown that anonymised data can often be reassociated with the people it was collected from.
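
A toy example of how such re-association works in practice: an “anonymised” release that keeps quasi-identifiers such as postcode, birth year and sex can be joined against a public register that does contain names. The records below are invented and the technique is generic rather than specific to genetic data.

```python
import pandas as pd

anonymised = pd.DataFrame({
    "postcode":   ["EH1 2AB", "G12 8QQ"],
    "birth_year": [1983, 1990],
    "sex":        ["F", "M"],
    "risk_flag":  [True, False],        # the sensitive attribute
})
public_register = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones"],
    "postcode":   ["EH1 2AB", "G12 8QQ"],
    "birth_year": [1983, 1990],
    "sex":        ["F", "M"],
})

# If a combination of quasi-identifiers is unique, the join restores identities.
print(anonymised.merge(public_register, on=["postcode", "birth_year", "sex"]))
```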

So sharing your genetic information could expose you to potential discrimination if it ends up with the wrong people or is used for the wrong purposes. Being offered different health insurance coverage and at different prices is the most obvious risk. But depending on who buys the data, pharmaceutical companies, employers and even government authorities could access your DNA and make decisions based on it.

Democratic governments can’t typically gather DNA evidence without the permission of a judge or via another legal procedure. But in the case of the “Golden State Killer”, US law enforcement agencies used DNA data from a public genealogy database to obtain evidence they wouldn’t otherwise have been able to collect. This raises concerns about the willingness of governments to use genetic records originally made to explore people’s ancestry for a very different purpose.

Giving away family secrets

The Golden State Killer case is all the more important because it highlights the most fundamental issue with DNA-sharing initiatives. When you share your DNA, you’re also sharing data about your entire family, who haven’t necessarily consented. The Golden State Killer didn’t get a DNA test but one of his relatives did. When enough people share their DNA, the genetic profile of entire communities becomes available.

A study of the database that was used to catch the killer estimated that it contained the profiles of 0.5% of the US population, yet this represented family members (third cousin or closer) of 60% of white Americans. With 2% of the population, that figure would increase to 90%.
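A toy model helps to show why such small database coverage reaches so many families. Assuming, purely for illustration, that each person has R relatives at third-cousin-or-closer distance and that database membership is independent across individuals, the chance that at least one relative is in a database covering a fraction p of the population is 1 - (1 - p)^R. The figure for R below is invented; the study cited above used a more detailed population-genetic model, so the numbers only roughly echo its results.

```python
# Toy model of relative-matching coverage (illustration only; the study
# cited in the text used a more detailed population-genetic model).

def match_probability(p: float, relatives: int) -> float:
    """Probability that at least one of `relatives` is in a database
    covering a fraction `p` of the population, assuming independence."""
    return 1.0 - (1.0 - p) ** relatives

# Hypothetical number of third-cousin-or-closer relatives per person.
R = 200

for coverage in (0.005, 0.02):  # 0.5% and 2% of the population
    print(f"coverage {coverage:.1%} -> match probability "
          f"{match_probability(coverage, R):.0%}")
```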

GMI currently plans to build the world’s largest whole-genome database of some 400,000 participants – roughly a tenth of Ireland’s population – from a presence in all the country’s major hospitals. This would likely give the firm information on almost every family group in Ireland and a huge proportion of the Irish diaspora (estimated at 70m), enabling it to identify the most private characteristics of a global population.

This shows how, when some people allow their DNA data to be shared, it can expose both them and their families to risk and erode the rights of everyone else, meaning we all have a stake in how genetic records are shared. Organisations must be required to be clearer about who will use the DNA data they collect, and for what purposes, in order to prevent the risk of misuse.

This article was originally published on The Conversation. Read the original article.

The real cost of cyberattacks

It must be acknowledged that cybersecurity now contributes to a business’s performance. Investing in effective IT tools has become an absolute necessity | Aspioneer
Text 'cybersecurity' on a black background

The world of cybersecurity has changed drastically over the past 20 years. In the 1980s, information systems security was a rather confidential field with a focus on technical excellence. The notion of financial gain was more or less absent from attackers’ motivations. It was in the early 2000s that the first security products started to be marketed: firewalls, identity or event management systems, detection sensors, etc. At the time, these products were clearly identified, as was their cost, which could be high. Almost 20 years later, things have changed: attacks are now a source of financial gain for attackers.

What is the cost of an attack?

Today, financial motivations are usually behind attacks. An attacker’s goal is to obtain money from victims, either directly or indirectly, whether through ransom demands (ransomware) or denial of service. Spam was one of the first ways to earn money, by selling illegal or counterfeit products. Since then, attacks on digital currencies such as bitcoin have become quite popular. Attacks on telephone systems are also extremely lucrative in an age where smartphones and computer technology are ubiquitous.

It is extremely difficult to assess the cost of cyber-attacks because of the wide range of approaches used. However, information from two different sources can provide insight for estimating the losses incurred: that of service providers and that of the scientific community.

On the service provider side, a report by the American service provider Verizon, the “Data Breach Investigation Report 2017”, measures the number of records compromised by an attacker during an attack but does not convert this information into monetary value. Meanwhile, IBM and Ponemon indicate an average cost of $141 US per compromised record, while specifying that this cost varies significantly by country, industrial sector and so on. And a report published by Accenture during the same period puts the average annual cost of cybersecurity incidents at approximately $11 million US (for a sample of 254 companies).
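As a back-of-the-envelope illustration of how such per-record figures translate into an incident cost, the snippet below multiplies a hypothetical breach size by the average per-record cost quoted above. The breach size is invented, and the per-record cost varies widely by country and sector, so this is only a rough sketch.

```python
# Rough breach-cost estimate from a per-record average (illustrative only).
cost_per_record_usd = 141        # average reported by IBM/Ponemon (2017)
records_compromised = 100_000    # hypothetical breach size

estimated_cost = cost_per_record_usd * records_compromised
print(f"Estimated direct cost: ${estimated_cost:,} US")   # $14,100,000 US
```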

How much money do the attackers earn?

In 2008, American researchers tried to assess the earnings of a spam network operator. The goal was to determine the extent to which an unsolicited e-mail could lead to a purchase. By analysing half a billion spam messages sent by two networks of infected machines (botnets), the authors estimated that the hackers who managed the networks earned $3 million US. The net profit, however, was very low. Additional studies have shown the impact of cyber-attacks on the share prices of corporate victims. The economics of cybersecurity has also become a field of study in its own right, notably through the Workshop on the Economics of Information Security.
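To make the underlying arithmetic concrete, here is a hedged back-of-the-envelope calculation in the spirit of that 2008 study: gross revenue is simply messages sent times conversion rate times average purchase value. The conversion rate and purchase price below are invented for illustration, not the study's measured values, which is why this toy case comes out far below the extrapolated $3 million figure quoted above.

```python
# Illustrative spam-revenue estimate (numbers are hypothetical, not the
# measured values from the 2008 study mentioned in the text).
messages_sent    = 500_000_000   # roughly "half a billion" spam messages
conversion_rate  = 1e-7          # assumed: one purchase per 10 million e-mails
average_purchase = 100.0         # assumed purchase value in US dollars

gross_revenue = messages_sent * conversion_rate * average_purchase
print(f"Gross revenue: ${gross_revenue:,.0f} US")  # $5,000 US in this toy case
```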

The figures may appear high but, as is traditionally the case for Internet services, attackers benefit from a network effect: the cost of adding one more victim is low, while the fixed cost of creating and installing the attack is high. In the case studied in 2008, the e-mails were sent using the Zeus botnet. Because this network steals computing resources from the compromised machines, even the initial cost of the attack was very low.

In short, the cost of cyberattacks has been a topic of study for many years now, in both academic and commercial work. Nevertheless, it remains difficult to determine the exact cost of cyber-attacks, and it is worth noting that this cost has historically been greatly overestimated.

The high costs of defending against attacks

Unfortunately, defending against attacks is also very expensive. While an attacker only has to find and exploit one vulnerability, those in charge of defence have to manage all possible vulnerabilities. Furthermore, a growing number of vulnerabilities is discovered in information systems every year, and new services and products regularly introduce additional ones, sometimes unbeknownst to the administrators responsible for a company network. One such case is the “bring your own device” (BYOD) model. By authorizing employees to work on their own equipment (smartphones, personal computers), this model destroys the perimeter defence that existed a few years ago. Far from saving companies money, it introduces an additional layer of vulnerability.

The cost of security tools also remains high. Firewalls or detection sensors can cost as much as 100,000 euros, and a monitoring platform to manage all this security equipment can cost up to ten times as much. Furthermore, monitoring must be carried out by professionals, and these skills are in short supply in the labour market. Overall, deploying protection and detection solutions amounts to millions of euros every year.

Moreover, it is also difficult to determine the effectiveness of the detection centres intended to prevent attacks, because we do not know the precise number of failed attacks. A number of initiatives, such as Information Security Indicators, are nevertheless attempting to answer this question. One thing is certain: given the number of attacks continually carried out on networks, information systems can be compromised or made unavailable on any given day. The spread of the malicious code WannaCry showed how brutal certain attacks can be and how hard it can be to predict their development.

Unfortunately, the only effective defence is often to update vulnerable systems once flaws have been discovered. This has few consequences for a workstation, but is more difficult on servers and can be extremely difficult in high-constraint environments (critical servers, industrial protocols, etc.). These maintenance operations always carry a hidden cost, linked to the unavailability of the hardware being updated. The strategy also has its limits. Certain updates are impossible to implement, as in the case of Skype, which requires a major software update whose status remains uncertain. Other updates can be extremely expensive, such as those correcting the Spectre and Meltdown vulnerabilities that affect the microprocessors of most computers; Intel has now stopped patching the vulnerability in older processors.

A delicate decision

The problem of security comes down to a rather traditional risk analysis, in which an organization must decide which risks to protect itself against, assess how exposed it is to them, and determine which ones to insure against.
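One common way to formalise that trade-off, not spelled out in the article and simplified here, is the classic annualised-loss-expectancy (ALE) calculation: the expected yearly loss from a risk is the cost of one incident multiplied by its expected yearly frequency, and a protection or insurance measure pays off only if it costs less than the loss it removes. All the figures below are hypothetical.

```python
# Simplified annualised-loss-expectancy (ALE) comparison; all figures are
# hypothetical and the model ignores correlations between risks.

def ale(single_loss: float, annual_rate: float) -> float:
    """Expected yearly loss: cost of one incident x incidents per year."""
    return single_loss * annual_rate

ransomware_ale = ale(single_loss=500_000, annual_rate=0.3)   # 150,000 / year
control_cost   = 80_000          # yearly cost of backups + filtering (assumed)
residual_ale   = ale(single_loss=500_000, annual_rate=0.05)  # risk after controls

benefit = ransomware_ale - residual_ale - control_cost
print(f"Net yearly benefit of the control: {benefit:,.0f}")  # 45,000 here
```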

In terms of protection, it is clear that certain filtering tools such as firewalls are imperative in order to preserve what is left of the perimeter. Other choices are more controversial, such as Netflix’s decision to abandon anti-virus software and rely instead on massive data analysis to detect cyber-attacks.

It is very difficult to assess how exposed a company is to risk, since that exposure is often the result of technological advances in vulnerabilities and attacks rather than of conscious decisions made by the company. Denial-of-service attacks, like the one carried out in 2016 using the Mirai malware, for example, are increasingly powerful and therefore difficult to counter.

The insurance strategy for cyber-risk is even more complicated, since premiums are extremely difficult to calculate. Cyber-risk is often systemic, because a single flaw can affect a large number of clients. The risk of natural catastrophe is limited to a region, allowing insurance companies to spread it across their various clients and to calculate future risk from historical data. Computer vulnerabilities, by contrast, are often widespread, as recent examples such as the Meltdown, Spectre and Krack flaws show: almost all processors and wi-fi terminals are vulnerable.

Another aspect that makes risks difficult to estimate is that vulnerabilities are often latent, meaning that only a small community is aware of them. The flaw used by the WannaCry malware had already been identified by the NSA, the US National Security Agency (under the name EternalBlue). The attackers who used the flaw learned about its existence from documents leaked from the American government agency itself.

How can security be improved? The basics are still fragile

Faced with a growing number of vulnerabilities and problems to solve, it seems essential to reconsider the way Internet services are built, developed and operated. In other industrial sectors, the answer has been to develop standards and certify products against them, guaranteeing correct operation, often in statistical terms. The aeronautics industry, for example, certifies its aircraft and pilots and has a very strong safety record. In a more closely related sector, telephone operators in the 1970s guaranteed excellent network reliability, with a risk of service disruption lower than 0.0001%.

This approach also exists in the Internet sector with certifications based on common criteria. These certifications often result from military or defence needs. They are therefore expensive and take a long time to obtain, which is often incompatible with the speed-to-market required for Internet services. Furthermore, standards that could be used for these certifications are often insufficient or poorly suited for civil settings. Solutions have been proposed to address this problem, such as the CSPN certification defined by the ANSSI (French National Information Systems Security Agency). However, the scope of the CSPN remains limited.

It is also worth noting that computer languages have consistently been positioned in favour of quick, easy production of code. In the 1970s, languages that chose ease of use over rigour came into favour, and such languages may be the source of significant vulnerabilities. PHP is a recent example: used by millions of websites, it has been one of the major causes of SQL injection vulnerabilities.
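The SQL injection problem mentioned here is language-agnostic; the sketch below uses Python and SQLite purely to illustrate the pattern. Building a query by string concatenation lets user input rewrite the query itself, whereas a parameterised query keeps the input as data.

```python
# Illustration of SQL injection and its fix, using Python's sqlite3 module
# (the vulnerability pattern is the same in PHP or any other language).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "x' OR '1'='1"   # malicious input

# Vulnerable: user input is concatenated into the SQL text.
query = "SELECT name FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())            # returns every user

# Safe: a parameterised query treats the input as a value, not as SQL.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())                           # returns nothing
```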

The cost of cybersecurity, a question no longer asked

In strictly financial terms, cybersecurity is a cost centre that directly impacts a company or administration’s operations. It is important to note, however, that choosing not to protect an organization against attacks amounts to inviting them, since it makes the organization an easy target. As is often the case, it is therefore worth recalling the rules of basic computer hygiene.

The cost of computer flaws is likely to increase significantly in the years ahead, and the cost of repairing them will rise even more dramatically. We know that the point at which an error is identified in code greatly affects how expensive it is to repair: the earlier it is detected, the cheaper the fix. It is therefore imperative to improve development processes in order to prevent programming errors from quickly becoming remotely exploitable vulnerabilities.

IT tools are also being improved. Stronger languages are being developed, including new ones such as Rust and Go, and older ones that have come back into fashion, such as Scheme. They represent stronger alternatives to the languages currently taught, without going back to languages as complicated as Ada, for example. It is essential that teaching practices progress in order to factor in these new languages.

Wasted time, stolen or lost data… We have been slow to recognize the loss of productivity caused by cyber-attacks. It must be acknowledged that cybersecurity now contributes to a business’s performance. Investing in effective IT tools has become an absolute necessity.

This article was originally published on The Conversation. Read the original article.
