ISPR Presence News

Monthly Archives: February 2019

Call: 2nd ACM PETRA Workshop on Social Robots: A Workshop on the Past, the Present and the Future of Digital Companions

Call for Papers

The Second ACM PETRA Workshop on Social Robots: A Workshop on the Past, the Present and the Future of Digital Companions
At PErvasive Technologies Related to Assistive Environments (PETRA) 2019
Rhodes, Greece
June 5-7, 2019
http://www.petrae.org/workshops/socialrobots.html

Read more on Call: 2nd ACM PETRA Workshop on Social Robots: A Workshop on the Past, the Present and the Future of Digital Companions…

Posted in Calls | Leave a comment

GIBLIB: First VR app and accredited degree program based on virtual live and recorded operating room experiences

[GIBLIB is advancing the use of presence experiences for surgical education. The company’s press release below announcing the first VR app for live OR experiences is followed by a Cedars-Sinai story about the medical center’s first accredited continuing medical education course filmed in VR. See Medgadget for an interview with the company’s CEO. –Matthew]

[Image: Source: Medgadget]

GIBLIB Launches First VR App for Live Operating Room Experiences through the Oculus Store

Emulates Real Surgeries For Medical Students and Practicing Physicians To Learn the Latest Surgical Techniques in 360-degree Virtual Reality

January 29, 2019 – GIBLIB, the streaming media platform offering the largest library of on-demand medical lectures and surgical videos in 4K and 360-degree virtual reality, today announced the launch of the first VR app for an immersive operating room experience to enhance surgical education. Through the app, GIBLIB provides medical students and practicing physicians the most immersive and accessible operating room (OR) experience anywhere, at any time.

The app supports GIBLIB’s direct-to-consumer offering, which requires only a subscription to the company’s streaming media service and an Oculus Go headset or Oculus Rift system. GIBLIB partners exclusively with leading academic medical centers, Cedars-Sinai and Stanford Children’s Hospital, to provide engaging and informative content from the best surgeons and physicians in the world, now accessible through the VR app in the Oculus Store.

Surgical education depends heavily on OR access, where surgeons learn new and updated procedural techniques from top experts, helping them continuously provide the best patient care available. Access to ORs with leaders in every specialty is highly limited, involves lengthy scheduling, and requires significant out-of-pocket travel expenses. The GIBLIB VR app is the first technology solution to perfectly emulate a complete OR environment with 360-degree VR content of both filmed and live-streamed surgeries, allowing medical students and practicing physicians to access the content at any time. Read more on GIBLIB: First VR app and accredited degree program based on virtual live and recorded operating room experiences…

Posted in Presence in the News | Leave a comment

Call: “Unifying Human Computer Interaction and Artificial Intelligence” issue of Human-Computer Interaction

Call for Papers:

Special Issue of Human-Computer Interaction:
“Unifying Human Computer Interaction and Artificial Intelligence”
http://showhow.fxpal.com/hcij/publicInfo/cfp_hciai.pdf

Special Issue Editors:
Munmun De Choudhury, Min Kyung Lee, David A. Shamma, and Haiyi Zhu

Timeline:
March 20, 2019: Proposals Due (1,000 words)
June 15, 2019: Full Papers Due

MOTIVATION

Over the past decade, artificial intelligence (AI) has increasingly been deployed across many domains such as transportation, retail, criminal justice, finance and health. But these very domains that AI is aiming to revolutionize may also be where its human implications are the most momentous. The potential negative effects of AI on society, from the amplification of human biases to the perils of automation, cannot be ignored, and as a result such topics are increasingly discussed in scholarly and popular press contexts. As the New York Times notes: “[…] if we want [AI] to play a positive role in tomorrow’s world, it must be guided by human concerns.”

However, simply introducing human guidance or human sensitivity into AI is not going to be enough to realize AI’s full potential or to prevent its unintended consequences. AI is increasingly being incorporated into technology design, including technologies of deep interest to researchers and practitioners in human computer interaction (HCI). While most AI-based approaches offer promising methods for tackling real-world problems, many of the technologies they enable have been developed in isolation, without appropriate involvement of the human stakeholders who use these systems and who are the most affected by them. Human involvement in AI system design, development, and evaluation is critical to ensure that AI-based systems are practical, with outputs that are meaningful and relatable to those who use them. Moreover, human activities and behaviors are deeply contextual, complex, nuanced, and laden with subjectivity, and these aspects may cause current AI-based approaches to fail because they cannot be adequately addressed by simply adding more data. As a result, to ensure the success of future AI approaches, we must incorporate new, complementary human-centered insights. These include stakeholders’ demands, beliefs, values, expectations, and preferences, attributes that constitute a focal point of HCI research and that need to be part of the development of these AI-based technologies.

The same issues also give rise to important new methodological questions. For instance, how can existing HCI methodology incorporate AI methods and data to develop intelligent systems that improve the human condition? What are the best ways to bridge the gap between machines and humans when designing technologies? How can AI enhance the human experience in interactive technologies, and could it help define new styles of interaction? How will conventional evaluation techniques in HCI need to be modified in contexts where AI is a core technology component? What existing research methods might be most compatible with AI approaches? And what will be involved in training the next generation of HCI researchers who want to work at the intersection with AI? Of course, the concepts of “design”, “interaction”, and “evaluation” continue to be interpreted by different HCI researchers and practitioners in many related but non-identical ways. Nonetheless, how the potential synergy between AI and HCI will influence these interpretations remains an open but pertinent question.

Naturally, conversations about the relationship between HCI and AI are not new. Shneiderman and Maes (1997) debated whether AI should be a primary metaphor in the human interface to computers. Similarly, Grudin (2009) described alternating cycles in which one approach flourished while the other suffered a “winter,” a period of reduced funding and of diminished academic and popular interest. And more than a decade ago, Winograd (2006) examined the strengths, limitations, and relevance of the rationalistic and design approaches offered by AI and HCI, respectively, when applied to “messy” human problems. The landscape of both AI and HCI research has evolved significantly since these early conversations, and researchers have become more vocal about the need for a stronger “marriage” between HCI and AI. Despite the competing philosophies and research styles of the two fields, the current context, both academic and societal, demands renewed attention to unifying HCI and AI.

This special issue aims to be a step forward in this regard. We hope to revive and extend prior attempts to bridge HCI and AI, given the burgeoning promise AI has recently shown and the traction it has gained in tackling challenging human problems. In doing so, we seek to engage both HCI and AI researchers to contribute theoretical, empirical, systems, or design papers that aim to unify these two perspectives. We want to bring together research that spans this wide set of issues to help integrate the different parts of this emerging space, and thereby to begin a constructive dialog that bridges the gap through original research.

TOPICS

Submissions should address key questions in unifying AI and HCI. The following questions are intended to be inspiring, not limiting:

  • How can we bridge the fundamental mismatch between human styles of interpretation, reasoning, and feedback and the machine’s statistical optimization over high-dimensional data?
  • How can we incorporate human insights—including stakeholders’ demands, beliefs, values, expectations, and preferences—into the development of AI technologies?
  • How can we predict the societal consequences of AI system deployment?
  • How can we systematically evaluate the social, psychological, and economic impacts of AI technologies?
  • How can we train our next-generation developers and designers to create AI systems in a human-centered manner?
  • How does AI change how we design and prototype new HCI systems and applications?
  • How should AI interactions be designed to help end users understand AI and make better decisions?
  • What HCI methods can we use to address AI’s limitations?
  • What design methods and prototyping tools can help us create novel AI applications and services?
  • How might existing human-centric methods help increase algorithmic transparency and explainability?
  • Where can AI help HCI in testing, evaluation, and User Experience?

Read more on Call: “Unifying Human Computer Interaction and Artificial Intelligence” issue of Human-Computer Interaction…

Posted in Calls | Leave a comment

Spherical display brings virtual collaboration closer to reality

[A new fish tank virtual reality (FTVR) display provides spatial and social presence for multiple users; most press coverage draws on this press release from the University of British Columbia (via EurekAlert!), which includes a 37-second video. Follow the links at the end below for more information. –Matthew]

[Image: Credit: Clare Kiernan, UBC. Source: UBC on Flickr]

Spherical display brings virtual collaboration closer to reality

‘Crystal ball’ supports two or more players working in VR

University of British Columbia
Public release: 19 February 2019

Virtual reality can often make a user feel isolated from the world, with only computer-generated characters for company. But researchers at the University of British Columbia and University of Saskatchewan think they may have found a way to encourage a more sociable virtual reality.

The researchers have developed a ball-shaped VR display that supports up to two users at a time, using advanced calibration and graphics rendering techniques that produce a complete, distortion-free 3D image even when viewed from multiple angles.

Most spherical VR displays on the market are capable of showing a correct image from only a single viewpoint, said lead researcher Sidney Fels, an electrical and computer engineering professor at UBC.

“When you look at our globe, the 3D illusion is rich and correct from any angle,” explained Fels. “This allows two users to use the display to do some sort of collaborative task or enjoy a multiplayer game, while being in the same space. It’s one of the very first spherical VR systems with this capability.” Read more on Spherical display brings virtual collaboration closer to reality…

Posted in Presence in the News | Leave a comment

Call: First IEEE International conference on Humanized Computing and Communication (HCC 2019)

First IEEE International conference on
Humanized Computing and Communication (HCC 2019)
September 25-27, 2019
Laguna Hills, California
https://www.humanizedcomputing.org/

Submission deadline: July 1, 2019

Artificial Intelligence (AI) is concerned with computing technologies that allow machines to move, see, hear, talk, think, learn, behave, and connect like humans. Humanized Computing and Communication (HCC) addresses the ability of a computer to mimic a human in perception, conversation, behavior, and networking. The huge potential of HCC represents an exciting future for individuals and businesses. In addition, business-to-business, business-to-human, and human-to-human relationships may be interconnected in revolutionary ways, stimulating a tremendous amount of interesting activity.

The First IEEE International Conference on Humanized Computing and Communication (HCC 2019) is an international forum for academia and industries to exchange visions and ideas in the state of the art and practice of HCC, as well as to identify the emerging topics and define the future of HCC.

TOPICS OF INTEREST include, but are not limited to:

  • Cognitive AI including machine vision and natural language processing
  • Conversational AI
  • Visual AI
  • Expressions and emotions
  • Models of human communication and interactions
  • Models of human and social behaviors
  • Communicating agents
  • Social agents
  • Interactions between visual, conversational, behavioral, and social AI
  • Artificial consciousness
  • Robotic intelligence
  • Human-robot and multi-robot communication

The conference proceedings will be published in the IEEE Xplore® and/or IEEE Computer Society Press digital libraries. Distinguished quality papers presented at the conference will be selected for the best paper/poster awards and for publication in internationally renowned journals (SCI, EI, and/or Scopus indexed). Read more on Call: First IEEE International conference on Humanized Computing and Communication (HCC 2019)…

Posted in Calls | Leave a comment

The artist is not present: How Marina Abramovic used mixed reality to create a hyper-realistic virtual performance

[Marina Abramović is using mixed reality technology to transform her performance art into what sounds like an interesting presence experience. This story from Artnet describes the project, a review in iNews provides more details, and the press release is available from the Serpentine Galleries. –Matthew]

The Artist Is Not Present: How Marina Abramović Used Mixed Reality to Create a Hyper-realistic Virtual Performance

The artist is using cutting-edge technology to create her new project at the Serpentine Galleries in London.

Naomi Rea
February 18, 2019

Marina Abramović is perhaps most famous for “The Artist Is Present,” her landmark 2010 exhibition at New York’s Museum of Modern Art, in which she sat in the museum’s atrium for 700 hours while visitors took turns sitting across from her.

For her latest project, which kicks off at London’s Serpentine Galleries on February 19, the artist will appear again in a gallery space in a red dress. But this time, she won’t actually be there.

Instead, viewers will encounter her likeness through mixed reality, a new technology that, with the aid of special goggles, allows audiences to see Abramović perform as though she were in the room with them. (To avoid any confusion, the museum’s website states unequivocally: “Please note this is a digital experience in Mixed Reality. The artist is not present.”)

According to the museum, the 19-minute work, titled The Life, represents the first large-scale performance anywhere in the world to use mixed reality. Read more on The artist is not present: How Marina Abramovic used mixed reality to create a hyper-realistic virtual performance…

Posted in Presence in the News | Leave a comment

Call: Adaptive and Personalized Privacy and Security (APPS 2019) – ACM UMAP Workshop

CALL FOR PAPERS

ACM UMAP Workshop – Adaptive and Personalized Privacy and Security (APPS 2019)
The 1st International Workshop on Adaptive and Personalized Privacy and Security
in conjunction with the 27th ACM Conference on User Modeling, Adaptation and Personalization (ACM UMAP 2019)
Larnaca, Cyprus
June 09-12, 2019

Workshop Website: http://appsworkshop.cs.ucy.ac.cy
Paper Submissions: https://easychair.org/conferences/?conf=apps2019

IMPORTANT DATES

Paper submission deadline: March 13, 2019 (23:59 AoE time)
Notification: March 26, 2019
Camera-ready papers: April 03, 2019

MOTIVATION & GOALS

Millions of users across different continents and countries engage daily with privacy and security tasks that are indispensable in modern information systems and services. Such tasks commonly involve user authentication, human interaction proofs (e.g., captchas), privacy and security pop-up dialogs, setting privacy and security features within online user profiles, etc. Recent privacy and security incidents at well-known online services have once more underscored the need to further investigate and improve current approaches and practices in the design of efficient and effective privacy and security mechanisms. One possible direction toward this objective is to provide adaptive and personalized characteristics to privacy- and security-related user tasks, given the diversity of user characteristics (such as culture, cognition, age, and habits), technologies (such as standalone, mobile, mixed/virtual/augmented reality, and wearable devices), and interaction contexts of use (such as being on the move, social settings, and spatial limitations). Hence, adaptive and personalized privacy and security implies the ability of an interactive system or service to support its end-users, who are engaged in privacy- and/or security-related tasks, based on user models that describe in a holistic way the user’s physical, technological, and interaction context in which computation takes place.

APPS 2019 aims to bring together researchers and practitioners working on diverse topics related to understanding and improving the usability of privacy and security software and systems, by applying user modeling, adaptation and personalization principles. The workshop will address the following objectives:

  • increase our understanding of how to support usable privacy and security interaction design through novel user modeling mechanisms and adaptive user interfaces;
  • discuss methods and techniques for understanding user attitudes and perceptions towards privacy and security issues in various application areas;
  • identify human-centered models for the design, development and evaluation of adaptive and personalized privacy and security systems;
  • discuss methods for evaluating the impact and added value of adaptation and personalization in privacy and security systems.

TOPICS OF INTEREST

Topics of interest include, but are not limited to:

  • Adaptation and personalization approaches in usable privacy and security
  • Effects of human factors (e.g., cognition, personality, etc.) in privacy and security systems
  • Novel user interaction concepts and user interfaces for achieving usable security
  • Cultural diversity in usable privacy and security
  • Context-aware privacy and security
  • Adaptive usable security in various domains such as healthcare, IoT, automotive
  • Trust perceived in patient-centric healthcare systems
  • Perceived security and usability in patient-centric healthcare systems
  • Adaptive user authentication policies
  • Novel approaches to the design and evaluation of usable security systems
  • Lessons learned from the deployment and use of usable privacy and security features
  • Ethical considerations in adaptive and personalized privacy and security

Read more on Call: Adaptive and Personalized Privacy and Security (APPS 2019) – ACM UMAP Workshop…

Posted in Calls | Leave a comment

Promise and peril: OpenAI researchers create “deepfakes for text”

[This detailed and balanced story from The Verge explains the potential – both positive and dangerous – of a new advancement in artificial intelligence that has received a lot of press coverage. See the original version for a different image and many examples of the new algorithm in action, and see the OpenAI blog for more information. –Matthew]

OpenAI’s New Multitalented AI Writes, Translates, and Slanders

A step forward in AI text-generation that also spells trouble

By James Vincent
February 14, 2019

OpenAI’s researchers knew they were on to something when their language modeling program wrote a convincing essay on a topic they disagreed with. They’d been testing the new AI system by feeding it text prompts, getting it to complete made-up sentences and paragraphs. Then, says David Luan, VP of engineering at the Californian lab, they had the idea of asking it to argue a point they thought was counterintuitive. In this case: why recycling is bad for the world.

“And it wrote this really competent, really well-reasoned essay,” Luan tells The Verge. “This was something you could have submitted to the US SAT and get a good score on.”

Luan and his colleagues stress that this particular essay was a bit of a fluke. “To be clear, that only happens a small fraction of the time,” says OpenAI research director Dario Amodei. But it demonstrates the raw potential of their program, the latest in a new breed of text-generation algorithms that herald a revolution in the computer-written world.

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.
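
To make the task concrete: language modeling simply means repeatedly predicting the next token given everything that came before. The sketch below is an illustration added for readers of this archive, not part of The Verge story; it assumes the open-source Hugging Face transformers library and the small, publicly released “gpt2” checkpoint (not OpenAI’s full research model), and the sampled continuation will differ on every run.

# A minimal sketch of next-word prediction with the small public GPT-2
# model, via the Hugging Face "transformers" library (an assumption of
# this example, not a tool mentioned in the article).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Any prompt works: a fake headline, the first line of a story, etc.
prompt = "Recycling is bad for the world because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; each new token is the model's guess at the
# most plausible "next word" given everything generated so far.
output_ids = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))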

You can see examples of GPT-2’s skills [in the original story]. In each screenshot, the underlined text was generated by the algorithm in response to the sentence (or sentences) before it.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs.

“[GPT-2] has no other external input, and no prior understanding of what language is, or how it works,” Howard tells The Verge. “Yet it can complete extremely complex series of words, including summarizing an article, translating languages, and much more.”

But as is usually the case with technological developments, these advances could also lead to potential harms. In a world where information warfare is increasingly prevalent and where nations deploy bots on social media in attempts to sway elections and sow discord, the idea of AI programs that spout unceasing but cogent nonsense is unsettling.

For that reason, OpenAI is treading cautiously with the unveiling of GPT-2. Unlike most significant research milestones in AI, the lab won’t be sharing the dataset it used for training the algorithm or all of the code it runs on (though it has given temporary access to the algorithm to a number of media publications, including The Verge). Read more on Promise and peril: OpenAI researchers create “deepfakes for text”…

Posted in Presence in the News | Leave a comment

Call: CHI PLAY 2019, 6th ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play

CALL FOR PARTICIPATION

CHI PLAY 2019
6th ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play
Barcelona, Spain
October 22-25, 2019
https://chiplay.acm.org/2019/ | @acmchiplay | #chiplay19

Submission deadlines:

  • April 9, 2019:  Full papers (4-10 pages)
  • May 2, 2019:  Workshop and Course Proposals
  • July 5, 2019:  Rapid Communications Papers, Doctoral Consortium, Student Game Competition, Interactivity, Works-in-Progress, and Workshop Position Papers

CHI PLAY is the international and interdisciplinary conference, sponsored by ACM SIGCHI, for researchers and professionals across all areas of play, games, and human-computer interaction (HCI). We call this area ‘player-computer interaction’. The goal of the CHI PLAY conference is to highlight and foster discussion on high-quality research in games and HCI as a foundation for the future of digital play. To this end, the conference blends academic research papers, interactive play demos, and industry insights. Full paper acceptance rate is typically below 30%.

SUBMISSIONS

As a SIGCHI-sponsored conference, CHI PLAY welcomes contributions that further an understanding of the player experience, as well as contributions on novel designs or implementations of player-computer interactions, including, but not limited to, the following:

  • Playful interactions and new game mechanics
  • Innovative implementation techniques that affect player experiences
  • Studies of applied games and player experiences (e.g., games and play for health, wellbeing, and learning)
  • Accessible and inclusive design and play experience
  • Advances in game user research and game evaluation methods
  • Psychology of players and typologies of games and players
  • Gamification, persuasive games, and motivational design
  • Virtual and augmented reality in games and play
  • Novel controls, input or display technologies for games and play
  • Tools for game creation
  • Innovations to advance the work of game designers and developers
  • Game analytics and novel visualizations of player experiences
  • Developer experiences and studies of developers
  • Industry case studies

Although CHI PLAY welcomes contributions on the effects of various technologies, software, or algorithms on player experience, technical contributions without clear indications of the impact on players or developers are not within the scope of CHI PLAY. The conference invites submissions including full papers, workshop and course proposals, interactive demos, work-in-progress papers, and Rapid Communications papers. Additionally, students are invited to submit to the student game competition and the doctoral consortium. Read more on Call: CHI PLAY 2019, 6th ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play…

Posted in Calls | Leave a comment

Friendly nurse or nightmare-inducing machine? How culture programs our taste in robots.

[This Washington Post story uses vivid examples to highlight cultural differences in how people experience medium-as-social-actor presence with robots that provide different sets of social cues. See the original story for two short videos. –Matthew]

[Image: Robots wearing nurse uniforms carry medical documents Wednesday at Mongkutwattana General Hospital in Bangkok. Credit: Athit Perawongmetha/Reuters.]

Friendly nurse or nightmare-inducing machine? How culture programs our taste in robots.

This is Thailand’s idea of an attractive robot. Americans might be terrified.

By Peter Holley
February 7, 2019

Slowly and silently, they glide across the floor wearing bright yellow dresses that look as though they were plucked from a haunted 1920s boarding school.

Beneath shoulder-length brown wigs, two blazing red eyes — each one massive and ghoulish — glare from behind a darkened pane of transparent plastic like a demonic predator lurking in the dark.

No, you haven’t encountered some Mothman-like terror entombed inside a department store mannequin, the byproduct of a twisted, futuristic fever dream. You’ve merely stepped inside Mongkutwattana General Hospital in Bangkok, where a team of robot nurses has been unleashed to make life easier.

Their job: ferrying documents between eight stations inside the health-care facility, a job that used to be carried out by busy human nurses, hospital director Reintong Nanna told Newsflare last year.

“These robotic nurses help to improve the efficiency and performance of working in the hospital,” he said. “They are not being used to reduce the number of employees.”

The trio of unsmiling machines — which can be programmed to speak both Thai and English — have been named Nan, Nee and Nim, according to the news outlet. Nanna said they move by following a magnetic strip that winds across the hospital floor, and can travel several miles each day.

Because they reduce human error, he added, the hospital plans to increase their workload to include moving equipment and preparing drug dosages.

Humanoid robots are taking a more active role in caring for the sick and elderly in Asia, but don’t expect to see similar machines roaming the halls of U.S. hospitals any time soon. That’s because robot design is often culture-specific, with some countries excitedly deploying machines that would probably terrify sickly patients in other countries, according to Cory Kidd, founder and CEO of Catalia Health, which has designed its own smiling, doe-eyed “personal healthcare robot” named “Mabu.”

His reaction to the glowing red eyes currently staring down Thai patients: “They’re creepy.”

“Robot aesthetics are culturally dependent,” he said, noting that the Thai hospital bots were designed in China. “If we had these nurses in a U.S. hospital, that would not work. They wouldn’t survive a day. But they might be received completely differently in Thai hospitals.”

Online surveys have revealed contrasts in how robots are perceived by country as well. While both Europeans and Japanese respondents agree that robots should be used to assist with difficult and repetitive tasks, the nature of acceptable tasks — and the degree of intimacy involved — differs widely, according to a team of international researchers. Read more on Friendly nurse or nightmare-inducing machine? How culture programs our taste in robots….

Posted in Presence in the News | Leave a comment
