Machine Learning in the Workplace

For my entry today, I chose a fascinating article by Kaveh Waddell of The Atlantic that discusses machine learning and algorithmic approaches to employee evaluation in the workplace. Much as I chose Big Data applications in the management of organizations for my INSC 560 course (Management of Information Organizations) last semester, my choice of article here reflects an interest in how technology is revolutionizing human resources in the corporate workplace.

Kaveh Waddell’s argument in this article is that machine learning applications, particularly in the blossoming field of sentiment analysis, will have a growing presence in the professional workplace, especially at larger companies and organizations. His discussion is well balanced, weighing both the promise and the drawbacks of the technology, though he ultimately casts it in a positive-to-neutral light.

Sentiment analysis, the primary focus of the article, is a subfield of the much larger technology industry developing around machine learning and Big Data applications. As Waddell points out, sentiment analysis grew out of the marketing analytics used in market research (2016). Increasingly, however, the technology is being turned inward, toward examining the behavioral habits, emotions, and communication of an organization’s own employees (Waddell, 2016). This raises clear privacy concerns that, unfortunately, go virtually unaddressed in Waddell’s article and deserve further discussion here.
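To make the idea concrete, the sketch below shows what lexicon-based sentiment scoring of workplace messages might look like, using NLTK’s off-the-shelf VADER analyzer in Python. The sample messages are invented for illustration; a real deployment would pull text from internal channels such as email or chat, which is precisely where the privacy concerns discussed next arise.

```python
# A minimal sketch of lexicon-based sentiment scoring over employee messages.
# The messages below are invented examples for illustration only.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

messages = [
    "Really proud of how the team handled the release this week.",
    "Another late night; the new approval process is exhausting.",
    "The training session was fine, nothing special.",
]

analyzer = SentimentIntensityAnalyzer()
for text in messages:
    scores = analyzer.polarity_scores(text)  # keys: neg, neu, pos, compound
    print(f"{scores['compound']:+.2f}  {text}")
```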

In a chapter titled “From Data Privacy to Location Privacy” from the book Machine Learning in Cyber Trust, Ting Wang and Ling Liu present an in-depth analysis of the privacy concerns growing alongside machine learning and of the many approaches researchers have developed to address them.

The data collected in machine learning applications are generally classified, for privacy purposes, into three categories: identity attributes, quasi-identity attributes, and sensitive attributes (Wang & Liu, 2009, p. 217). Identity attributes are pieces of information that directly identify an individual (e.g., a Social Security number or voter registration); quasi-identity attributes are pieces of information that, when combined, may reveal sensitive information or directly identify an individual (e.g., zip code, address, or place of employment); sensitive attributes are information an individual typically wishes to keep private for personal reasons (e.g., health conditions or criminal records) (Wang & Liu, 2009, p. 220). To prevent privacy breaches, theoretical models typically take one of two approaches: (a) limiting what data is measured in the first place, based on criteria that keep the information from ever being aggregated, or (b) applying a data manipulation or transformation method that filters sensitive information out of a data set before publication; that is, the data is collected but then filtered, as sketched below (Wang & Liu, 2009, p. 218). To keep sensitive information from being uncovered by organizations or individuals with malicious intent, an organization must therefore adopt a model that protects the individual’s privacy while remaining within legal and constitutional requirements.
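As a toy illustration of the second approach, the Python sketch below suppresses identity attributes and generalizes quasi-identity attributes before a record is released. The field names and generalization rules are assumptions made for this example; they are not Wang and Liu’s actual scheme.

```python
# Toy illustration of "filter before publication": identity attributes are
# dropped, quasi-identity attributes are generalized, and only then is the
# record released. Field names and rules are assumptions for illustration.

IDENTITY_ATTRS = {"name", "ssn"}            # direct identifiers: never published
QUASI_IDENTITY_ATTRS = {"zip_code", "age"}  # generalized to blunt re-identification

def generalize(field, value):
    """Coarsen a quasi-identifier so it maps to a group, not a person."""
    if field == "zip_code":
        return value[:3] + "**"            # keep only the regional prefix
    if field == "age":
        return f"{(value // 10) * 10}s"    # 37 -> "30s"
    return value

def sanitize(record):
    """Return a publishable copy of one employee record."""
    cleaned = {}
    for field, value in record.items():
        if field in IDENTITY_ATTRS:
            continue                        # suppress identity attributes
        if field in QUASI_IDENTITY_ATTRS:
            value = generalize(field, value)
        cleaned[field] = value              # sensitive attrs remain, but unlinked
    return cleaned

record = {"name": "J. Doe", "ssn": "000-00-0000", "zip_code": "37996",
          "age": 37, "health_flag": "none"}
print(sanitize(record))
# {'zip_code': '379**', 'age': '30s', 'health_flag': 'none'}
```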

Just as there are privacy concerns, however, the benefits of machine learning algorithms cannot be overstated. As Waddell’s case examples suggest, policy changes that might otherwise take many months can instead happen in near real time (Waddell, 2016). Additionally, employee grievances that might never surface during routine evaluations may become evident through analytics that monitor employee behavior (Waddell, 2016).

In the traditional sense, machine learning “aids knowledge workers and customers alike” (Petrocelli, 2017). How it does so varies by application, but a common theme is the reduction of errors. As Petrocelli points out, errors made by customers can throw them into longer queues over simple issues, while errors made by employees can put the business itself at risk (2017). Machine learning helps by distilling massive quantities of information into a more meaningful and manageable form that managers can use in decision-making (Petrocelli, 2017).
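The brief sketch below illustrates that distilling step under some assumptions: many per-message sentiment scores (invented here) collapse into a single average per team that a manager could actually review.

```python
# A minimal sketch of the "distilling" step: many per-message sentiment scores
# collapse into one number per team. Team names and scores are invented.
from collections import defaultdict
from statistics import mean

scored_messages = [
    ("support",     -0.45), ("support",     -0.30), ("support",  0.10),
    ("engineering",  0.52), ("engineering",  0.35),
    ("sales",        0.05), ("sales",       -0.10),
]

by_team = defaultdict(list)
for team, compound in scored_messages:
    by_team[team].append(compound)

for team, scores in sorted(by_team.items()):
    print(f"{team:12s} avg sentiment {mean(scores):+.2f} over {len(scores)} messages")
```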

There are clear pros and cons to using machine learning technologies on employees. Privacy concerns will need to be addressed if such technologies are to become common in the workplace; otherwise, company policies will be contested in the courts for years to come.


References

Waddell, Kaveh (2016, September 29). The Algorithms That Tell Bosses How Employees Are Feeling. The Atlantic. Retrieved February 26, 2018, from https://www.theatlantic.com/technology/archive/2016/09/the-algorithms-that-tell-bosses-how-employees-feel/502064/

Wang, Ting, & Liu, Ling (2009). From Data Privacy to Location Privacy. In Yu, P., & Tsai, Jeffrey J. P. (Eds.), Machine Learning in Cyber Trust: Security, Privacy, and Reliability (pp. 217–246). Boston, MA: Springer-Verlag US.

Petrocelli, Tom (2017, October 2). When Machine Learning Benefits Employees and Customers Alike. CMSWire. Retrieved February 26, 2018, from https://www.cmswire.com/customer-experience/when-machine-learning-benefits-employees-and-customers-alike/
