By Dan Cohen

The Seneca County Industrial Development Agency (IDA) is nearing its selection of a buyer for 7,000 acres of undeveloped land at the former Seneca Army Depot in New York's Finger Lakes region.

A nine-member working group, made up of members of the IDA and the county Board of Supervisors, is considering 16 bids covering a range of end uses that were submitted before a February deadline.

"It's getting closer," Steve Brusso, chairman of the Depot Bid Review Workgroup, said last week before the panel began to meet in executive session. "If all goes well, we could have a decision in June."

Only one bidder's identity has become public, as the IDA has decided to deliberate behind closed doors. On Tuesday, the Finger Lakes Times reported that its request for a list of bidders and bid amounts was denied by the agency. An IDA spokeswoman told the paper that revealing the names of the bidders could harm the integrity of the selection process.

"For example, simply releasing the names of the bidders would give interested parties, particularly the other bidders, the ability to discern the relative viability of the bids and, as such, undermine the agency's negotiating position," Kelly Kline told the paper.

The one publicly announced bid is a partnership between a conservation group and the town of Varick seeking to preserve the depot's herd of rare white deer. That proposal calls for converting the property into an ecotourism center focused on wildlife conservation.

The IDA decided to sell the remaining acreage at the depot, which was shuttered in 2000, now that the Army's caretaker responsibilities are winding down. The agency does not have the resources to adequately maintain the property, and it wants to return the land to the tax rolls to increase the county's tax base and provide an economic asset to the region.
The Indian Institute of Advanced Studies (IIAS), a research institution in Shimla, has published another edition of a book by Myanmar's pro-democracy icon Aung San Suu Kyi, an institute official said on Friday.

'The second edition of Suu Kyi's book Burma and India: Some Aspects of Intellectual Life under Colonialism has been brought out as a paperback edition and is priced at Rs 195,' IIAS director Peter Ronald Desouza said in a statement.

The book is based on the manuscript Suu Kyi submitted after completing her fellowship at the IIAS in 1987. First published in 1990, it is a comparative study of intellectual life under colonialism in the two countries. It describes the varying responses of India and Burma to British colonialism, responses which reflect the changing social structure and character of the two societies. It also discusses the Buddhist influence from India on Burma, and the inability of Burmese society to resist the colonial onslaught, in contrast to India, which developed a more substantial response.

Myanmar's opposition leader stayed at the IIAS with her husband Michael Aris, who was also a fellow, and their two sons.

'It was through the ambassador of India to Burma that Suu Kyi could be sent the re-typed and proof-read version of her book to make the necessary changes, which she did,' Desouza said. 'She chose the cover design,' he added.

At Suu Kyi's request, he said, the IIAS will send copies of the book to public libraries and universities across India. The IIAS is a premier advanced research institution in the humanities and social sciences.
7 min read | April 27, 2018

Opinions expressed by Entrepreneur contributors are their own.

There is something uniquely unsettling about watching footage of Atlas, the robot developed by Boston Dynamics. Its human-like movements suggest a sense of body-awareness and intuition inherent in human beings, yet it is distinctly not human.

The technologies behind AI and robotics will continue advancing. As they become ever more sophisticated, we must ask ourselves: how human-like should AI be? Where should we allow the boundaries to continue to blur, and where do we need to draw a clear line in the sand? It's a challenging conundrum, made only more complicated by headlines about robot citizenship and speculation about an impending apocalypse.

When we evaluate AI's evolving role in customer experience, we can begin to answer this question. The early implementation of chatbots serves as a small window into the world of human-bot interactions, and a case study for how the technology should be shaped moving forward.

Related: Top 10 Best Chatbot Platform Tools to Build Chatbots for Your Business

When mirroring human behavior makes sense.

Early hype around chatbots was met with disappointed groans from the many consumers prematurely introduced to bots. Initial bots were rightly criticized for being ineffective and often incapable of performing the basic tasks they were designed to do. What was probably most frustrating for customers dealing with those bots, however, was their lack of empathy. A customer who takes the time to contact a brand for help wants to feel understood. The irony here is that machines are not particularly well versed in feelings (in their defense, I know plenty of humans who are not, either).

As technology develops, AI will need to become more emotionally aware to truly understand human requests.
Empathy is a must as companies increasingly seek to communicate with consumers through automated solutions. Chatbots have come a long way from the early days. An estimated 16 percent of Americans (that's 39 million people) now own a smart speaker. But even in these more advanced solutions, there is a fairly chronic problem of tone-deafness. The human-bot relationship is the new normal, so we must think critically about the possible long-term impact of tone-deaf AI.

Consider that when you make a request of Alexa, she will not say "you're welcome" if you thank her. On the one hand, it is encouraging to know she is no longer "listening" after a command; on the other, many are concerned that we are setting a precedent of rudeness and callousness for a future generation. A more nefarious example is found in the lack of consequences for being rude to bots, and more specifically, in the way bots respond to things like sexual harassment. In the case of Alexa, she will now "disengage" if asked to do something inappropriate, but as the Quartz writer Leah Fessler points out, her North Star is to please. That's problematic when we consider that more complex bots like Sophia will be among us soon.

To avoid bots that perpetuate a tone-deaf society, we need to train AI on empathy. This is not a simple task, but it is possible. Empathy is a "soft skill," but it is a skill nonetheless, so training AI on empathy can be approached the same way we train AI on anything: with a digestible data set. That means training the model to "hear" data points such as tone of voice (both written and verbalized), words that express sentiments and emotions, and even how a human's responses change temporally, over the course of an interaction. Is the human transitioning from agitated to happy, or the inverse? A bot needs to know the difference so it can temper its response, avoiding agitating a calm human or helping to calm down an agitated one.
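The temporal signal described above can be sketched in a few lines. This is a minimal illustration, not a trained sentiment model: the keyword lists and scoring rule are placeholder assumptions, standing in for whatever learned classifier a real system would use.

```python
# Toy sketch: track whether a user's messages are trending calmer or more
# agitated over a conversation. The word lists below are illustrative
# placeholders, not a real sentiment lexicon.

AGITATED_WORDS = {"angry", "ridiculous", "unacceptable", "broken", "worst"}
CALM_WORDS = {"thanks", "great", "perfect", "appreciate", "happy"}

def sentiment_score(message: str) -> int:
    """Crude per-message score: positive means calm, negative means agitated."""
    words = message.lower().split()
    return sum(w in CALM_WORDS for w in words) - sum(w in AGITATED_WORDS for w in words)

def trend(messages: list[str]) -> str:
    """Compare the early half of the conversation with the late half."""
    scores = [sentiment_score(m) for m in messages]
    half = len(scores) // 2
    early, late = sum(scores[:half]), sum(scores[half:])
    if late > early:
        return "calming"
    if late < early:
        return "escalating"
    return "steady"
```

A bot could use the returned trend to temper its next reply, for example switching to a short, validating, escalation-ready tone when the conversation is "escalating" and a more relaxed one when it is "calming."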
Understanding emotions and feelings is a critical part of understanding humans: how they act, what they want and how to respond appropriately. If a machine can't learn empathy, it can't understand how empathy affects human requests and actions, and it can't create optimal outcomes.

Training AI on empathy means teaching machines to extract subjective emotions, feelings and sentiments from conversations with humans. AI models can learn, just as humans do, how those qualitative feelings shape needs, responses, actions and results. An angry human, using anger trigger words (including obscenities), a loud voice, or changes in speaking cadence or word choice, may demonstrate a need to be heard, to have their feelings validated, and perhaps to have issues escalated; they want a quick and effective response. A happy human who wants to talk is more comfortable taking time, chit-chatting, maybe even having a friendly conversation with a bot about marginally relevant topics like the weather or a sports team. Until AI can recognize emotion and empathy, it can't learn how emotion affects human behavior, needs and desired responses.

Related: The Next Addition to Your Marketing Department Should Be a Chatbot

When bots should just be bots.

The counter-narrative to a future where the line between bot and human is indistinguishable is that humans are flawed. Why would we want to recreate AI in our exact likeness, when the promise of AI is helping us go beyond our limitations? The hard line in the sand we should draw between bots and humans is biased decision-making.

One of the greatest strengths bots possess is their lack of shame or ego. These human emotions are so often at the heart of our inability to recognize our own biases, whatever shape they take: racist opinions (whether overt or unconscious), sexist assumptions, or whatever other -ism we have all been guilty of at some point or another.
Taking advantage of this "fresh slate," we need to ensure bots do not simply learn to respond differently to prejudicial attributes such as male versus female voices or language of origin. There are many attributes that AI may determine affect human responses and needs, but humans have an obligation not to let bots overgeneralize based on gender, language or nation of origin, just as we ask our human employees not to discriminate.

Related: How Chatbots Save Time and Change How Business Gets Done

In the last two years, we have watched Sophia the robot evolve from agreeing to destroy the human race to telling a joke, learning to walk and, most recently, joining Will Smith on a "date." As such a public symbol of AI, Sophia makes it clear that the technology is improving in leaps and bounds, but still has quite a way to go. Most of us will not be interfacing with the likes of Sophia, but many of us will continue to interact with Siri, Alexa and the other branded chatbots of the world. This ever-expanding proliferation requires that we continue to push for more empathy in AI and less bias. In this way we will hopefully arrive at a place where AI can seamlessly interact with humans without falling victim to human error. Finding that balance will be tricky, but the resulting harmony will have a hugely positive impact on consumers and companies alike.