Featured Undergraduate Alumnus

Joshua Pawlicki

By Ted Pawlicki

Featured Graduate Alumnus

Mayur Thakur

By Lane A. Hemaspaandra

Mayur Thakur received his B.Tech. from IIT Delhi in 1999 and his PhD from URCS in 2004. As a grad student, he spent his summers interning at Los Alamos National Laboratory, Microsoft Research, and even a startup. After serving as a tenure-track CS faculty member, he worked on Google Search, was a Managing Director and global head of surveillance engineering at Goldman Sachs, and was the Chief Data Officer for the healthcare data technology company H1. He is currently a Managing Director at Bank of America. Mayur has published in a wide range of areas, including complexity theory, cryptography, data mining, discrete dynamical systems, graphs and networks, quantum computing, and recommender systems.


Mayur was interviewed for Multicast by URCS professor Lane A. Hemaspaandra, who was Mayur’s PhD advisor.

Featured Article

The Era of Large Language Models

By Hangfeng He

Introduction:


With the advent of ChatGPT, a cutting-edge model unveiled by OpenAI in November 2022, discussions in the natural language processing (NLP) community have been dominated by the rise of Large Language Models (LLMs). ChatGPT has not just transformed the dynamics within the field of AI; it has seamlessly integrated itself into our daily lives. From refining pieces of writing to providing answers to a myriad of questions, its applications are profound. This article will delve into the genesis of LLMs, highlighting the great opportunities they present, as well as the challenges and concerns they introduce.


The origin of LLMs:


The rudimentary concept of a language model has existed for decades, but the evolution of LLMs is a relatively recent phenomenon. The cornerstone architecture behind LLMs is the Transformer, introduced by Google Brain in 2017. Because the Transformer relies on a parallel multi-head attention mechanism rather than sequential recurrence, it far outpaced recurrent neural networks (RNNs), the dominant architecture at the time, in training efficiency, and it quickly became the foundation for successive LLMs.
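
To make the mechanism concrete, below is a minimal NumPy sketch of scaled dot-product attention, the operation at the core of multi-head attention (an illustrative toy, not the original implementation):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d_k) arrays of queries, keys, and values.
        # Every position attends to every position in one matrix product,
        # which is why a Transformer trains in parallel while an RNN must
        # step through tokens one at a time.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V                              # weighted sum of values

    # Self-attention over 4 tokens with 8-dimensional embeddings.
    x = np.random.default_rng(0).normal(size=(4, 8))
    print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)

Multi-head attention simply runs several such computations in parallel over different learned projections of the input and concatenates the results.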


In 2018, OpenAI’s Generative Pre-Training framework (its model later called GPT-1) marked the debut of Transformer-based LLMs. GPT-1 introduced a novel fine-tuning technique for applying pre-trained language models to various tasks. Later that year, Google introduced BERT (Bidirectional Encoder Representations from Transformers), which took a similar approach. The breakthrough was that both models moved away from building task-specific architectures from the ground up; instead, the same pre-trained structure was fine-tuned for multiple tasks, drastically reducing training time.
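
As an illustration of what this paradigm looks like with today's tooling, the sketch below fine-tunes a pre-trained BERT checkpoint for sentiment classification using the Hugging Face transformers and datasets libraries (the dataset and hyperparameters are arbitrary choices for illustration, not the original training setups):

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    # One shared pre-trained backbone; only the small classification
    # head on top is new. The same recipe transfers across many tasks.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=128)

    encoded = load_dataset("imdb").map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="bert-imdb", num_train_epochs=1,
                               per_device_train_batch_size=16),
        train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
    )
    trainer.train()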


Post-BERT, the research community avidly explored BERT’s variants and their potential. OpenAI continued to advance its GPT series, launching GPT-2 in 2019. Despite a sizable increase in parameters and training data, GPT-2 did not eclipse BERT on key NLP benchmarks. Yet OpenAI persisted, releasing GPT-3 the next year. This behemoth, with a staggering 175 billion parameters, heralded the era of zero-shot and few-shot learning in NLP. Although still lagging behind fine-tuned BERT on certain tasks, GPT-3 eliminated the need to train or fine-tune on downstream tasks at all.
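
Few-shot learning here means conditioning the frozen model on a handful of worked examples placed directly in the prompt, with no gradient updates at all. A hypothetical sentiment-classification prompt might look like this:

    # A few-shot prompt: the frozen model infers the task from the two
    # worked examples and completes the last line (here, with "positive").
    prompt = (
        "Review: A wonderful, heartfelt film. Sentiment: positive\n"
        "Review: Two hours I will never get back. Sentiment: negative\n"
        "Review: Clever, moving, and beautifully shot. Sentiment:"
    )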


Riding this wave of success, and incorporating high-quality human feedback, OpenAI released ChatGPT on top of the GPT-3.5 model. ChatGPT’s prowess clearly exceeded BERT’s, ushering NLP into an epoch where pre-trained LLMs took center stage, guided only by thoughtfully crafted prompts. OpenAI’s next innovation was a multimodal iteration, GPT-4, which merged text and visual inputs to generate text-based outputs. It is worth noting that ChatGPT’s triumph was a confluence of scaling both the model and the training data (as seen in GPT-1, GPT-2, and GPT-3) and harvesting rich, real-world human feedback (in the vein of GPT-3.5 and GPT-4).


In sum, LLMs have catalyzed a paradigm shift in the NLP landscape. We have transitioned from crafting and training dedicated models on thousands of annotated examples to employing a single pre-trained LLM across diverse tasks without any task-specific training or annotations.


[Figure: The shift of the NLP paradigm]

Opportunities:


The remarkable accomplishments of LLMs bring a multitude of new opportunities. In this section, we spotlight three avenues ripe with potential within the realm of LLMs.


Tool-Augmented LLMs: Tools are emerging as instrumental in magnifying LLM capabilities. Echoing this trend, platforms like ChatGPT have rolled out support for plugins, morphing into a novel app store of sorts. Some plugins harness LLMs for practical applications, like flight search, while others aim to hone specific LLM abilities, such as arithmetic calculation. We see ample room for improvement in the synergy between tools and LLMs, as sketched below.
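
As a sketch of the underlying pattern, the loop below lets a model delegate arithmetic to a calculator tool. Here call_llm is a placeholder for any hosted model API, and the JSON tool-call convention is a simplification for illustration, not ChatGPT's actual plugin protocol:

    import json

    def call_llm(messages):
        # Placeholder: a real implementation would send `messages`
        # to a hosted LLM and return its text reply.
        raise NotImplementedError

    # Toy calculator tool; eval is restricted here, but even so this is
    # only appropriate for a demonstration.
    TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

    def run_with_tools(user_request):
        messages = [{"role": "user", "content": user_request}]
        while True:
            reply = call_llm(messages)
            try:
                # Convention: a JSON reply such as
                # {"tool": "calculator", "input": "17 * 23"} requests a tool.
                call = json.loads(reply)
            except json.JSONDecodeError:
                return reply  # plain text is treated as the final answer
            result = TOOLS[call["tool"]](call["input"])
            messages.append({"role": "tool", "content": result})

The key design point is the loop: each tool result is appended to the conversation so the model can decide whether to call another tool or produce a final answer.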


Multimodal Learning: LLMs have underscored the potential of unsupervised pre-training of foundation models across diverse modalities, including images and audio. However, representing equivalent information in those modalities tends to be more bit-intensive than in text, limiting the amount of data that can be pre-trained on within the same computational budget. Given the complexities of multimodal pre-training, LLMs might hold an edge over models specifically tailored to other modalities. Consequently, a promising direction is to better combine LLMs with specialized models for other modalities on multimodal tasks. There have been preliminary ventures into this territory, but further exploration is needed, especially work that accounts for the unique data attributes of each modality to craft more robust hybrid systems.


LLMs for Science: LLMs can also be applied across other disciplines. For example, lawyers can use LLMs to automate legal document drafting, administrative staff can use them to sift through voluminous documents, and educators can employ them as instructional aids. Moreover, because LLMs have gleaned vast swaths of online information, they possess a knowledge reservoir broader than any individual’s. They can be seen as collaborators, working alongside humans to augment and expedite scientific research. A crucial endeavor in this direction is harmonizing domain-specific human expertise with LLM capabilities, which demands innovative methodologies for interaction between humans and LLMs.


Concerns:


While LLMs boast significant advancements, they bring along a slew of concerns. This section discusses some of the most pressing.


1. Social and Ethical Concerns:


Privacy Issues: Privacy is a major concern with OpenAI’s LLMs. Because the models are not open-source, access is primarily through OpenAI’s APIs. Although OpenAI asserts that users retain control of their data when using the APIs, apprehensions persist, since users must still share their data with OpenAI to harness the LLMs. There is also the risk of LLMs inadvertently divulging sensitive data or producing content that infringes on copyrights. These concerns emphasize the necessity of strategies and regulations to safeguard privacy and copyright.


Bias Issues: Like other machine learning models, LLMs can propagate and even amplify biases in their training data. Past efforts have used alignment techniques or refined prompts to bring LLM outputs in line with human values, but these measures often fall short of human expectations. Addressing bias may require more proactive interventions during the data-collection and unsupervised pre-training phases.


2. Superintelligence Concerns:


LLMs spur concerns about human roles becoming redundant. But the primary intent behind LLMs is to augment human capabilities, not to supplant them. In addition, some researchers are worried about LLMs potentially surpassing human intelligence in the foreseeable future. This scenario demands approaches to guide and regulate AI systems, especially if they evolve beyond our cognitive capabilities. Recognizing the gravity of these concerns, OpenAI has initiated a specialized superalignment team, dedicating a significant portion of computational resources to address the challenges linked to super-intelligent systems.


Conclusion:


Epitomized by models like ChatGPT, LLMs have profoundly reshaped NLP, the broader AI community, and even people’s everyday routines. The ascendancy of LLMs is not attributable solely to breakthroughs like the Transformer architecture; it is also a testament to OpenAI’s persistence in evolving the GPT series.


The advent of LLMs unlocks a plethora of opportunities, enabling tasks previously deemed impossible. However, as with any monumental progress, there comes a duty to navigate the emergent challenges with prudence and foresight.


Acknowledgement: The writing of this article was polished with the assistance of ChatGPT.

HCI Update

By Zhen Bai

2023 Commencement Awards

Click Here for More on UR’s 2023 Commencement

Undergraduate and Graduate Highlights

Faculty and Staff Highlights

PhDs Conferred 2022-2023

2023 Honors Research

We had five honors students in the Undergraduate Class of 2023.


Aayush Poudel: Compressing Ray Trajectory Mapping using Bezier Curves (Honors)


Henry Lin: Tracking Words (Honors)


Yurong Liu: Sampling Over Union of Joins (Highest Honors)


Draco Xu: Network Construction on Historical and Real-Time Data (Highest Honors)


Enting Zhou: Unsupervised Arousal Valence Estimation from Speech and Corresponding Discrete Emotion (High Honors)

Click here to see their projects

Alumni Updates



Liudvikas Bukys, MS ’86

Liudvikas Bukys is now working at a startup, Reframe Technologies, building a product to transform how we get our work done with computers. He splits his time between beautiful Keuka Lake and warm sunny Clearwater, Florida.


Jim Heliotis, PhD ’84

I am now retired from RIT with the “Professor Emeritus” title. I hope to still stay somewhat active in the CS education area. I ran a workshop at a regional conference in April and I hope to do the same thing at a larger conference next winter. Other than that I’m just spending a bunch of time around the homestead doing projects that I literally put off for decades!


Kailash Joshi, MS ’17

I am delighted to share that last year (29th Aug) I embarked on a journey with my dream company, Microsoft! The journey began with the initial aspiration of “One day,” followed by meticulous planning, thorough preparation, a series of interviews, experiencing rejections, finally receiving an acceptance, going through the onboarding process, and culminating in the much-awaited “Day 1.” It has been quite an exhilarating ride!


Ronald P. Loui, PhD ’88

Probably teaching a seminar at Case in the Fall tentatively titled AI GOOD AND EVIL. Got the green light to write an article on proximal cause and machine learning based product liability for LAW OF AI 2nd ed. If you’re really curious, awkscripts.com/oldweb/loui.html is a recent summary of my 62 years (lots of photos, but no runaway js, so the browser can handle it). I’d rather be working on defense tech, but I dislike commercial airlines and I like living near Cleveland. I guess the big news is a new timeline for the Torah based on Amorite history and biannual shanah counting, but you’ll have to click on the link or read my Facebook friend posts to see it.


Jim Muller, PhD ’94

I’m dividing my time between three companies as co-founder or co-owner. Didero Games, launched this year, is a subscription rental club for physical Nintendo games. We’ve been running a similar club, the Hoefnagel Wooden Jigsaw Puzzle Club, since 2020. Both use AI-driven peer-to-peer shipping. I’m also co-owner of Artifact Puzzles, designing and manufacturing wooden jigsaw puzzles since 2009.


Danny Sabbah, PhD ’82

Along with co-authors, I have written a book, The Heart of Innovation, which is now going through the publishing process and will be available in the fall (early November); it is already available for pre-order on Amazon. I am also in the process of setting up a venture fund based on the principles in the book. We introduce the equivalent of behavioral thinking (much like behavioral economics) into the early evaluation of proposed innovations. This filters out cognitive biases and introduces a concept called “authentic demand” into the conversation around innovation, and we develop a method for extracting and understanding authentic demand. The book is in two parts: the first presents examples from our collective history of accidental innovations; the second introduces the method for “deliberate innovation.” The preface is written by Arvind Krishna, the current CEO of IBM.


Robert Schudy, PhD ’82

Much has happened these last few years:

I’m now Emeritus from Boston University.

I learned of my advisor Dana Ballard’s death. Dana did so much to help me, including providing detailed corrections for many drafts of my thesis. I remember his guidance well, and wonder if I will ever live up to what he prepared me for.

I’ve written two books on online education with my Boston University colleagues Anatoly Temkin and Dan Hillman. Our first book is titled “Best Practices for Administering Online Programs.” It’s the academic administration title in the Routledge Best Practices in Online Teaching and Learning series. When we finished that book, Routledge asked us to write a book on teaching online, so we wrote “Winning Online Instruction: A Q&A for Higher Education Faculty.” The section titles of this book are questions that faculty frequently ask, and the text answers those questions. Both books have been well received.

My wife Liz Watson and I spend our summers in Lincoln, Massachusetts, but we purchased a small condo on an intracoastal-connected lake in Hallandale Beach, Florida, and we spend our winters there. We drive back and forth, and stopped to see my classmate Bryan Lyles in North Carolina on the way north this year. We have a Crealock 34 cruising sailboat, which we keep at the dock at our condo. About five years ago we joined the Gulfstream Sailing Club. I’m now on the Board of the Club and of the affiliated Gulfstream Sailing Foundation. Our main mission is teaching children how to sail, and we’ve taught thousands. I’m also co-captain of the ocean racing committee and dockmaster at our condo. We lead modest yacht races many Saturdays, and enjoy cruises in the Florida Keys.

One of my genuinely enriching experiences is joining a Bahamian Anglican church in Hallandale, and singing in their choir. Bahamians are renowned amongst sailors as the friendliest people on earth, and it’s true. They’ve welcomed me warmly, even though I’m very different and usually the only pale person in the church. I want to really understand and feel what it’s like to be black in America. After six months I’m beginning to understand that it’s much more difficult than most people think. I also sing in the Episcopal Church in Lincoln, which is very different. Many Lincoln parishioners are wealthy, and few are poor. In Hallandale there is a free breakfast after church, prepared by parishioners in the parish hall, so that everyone has at least one good meal, and there are tables with donated food to help parishioners make ends meet. In Lincoln we provided everyone N95 masks, used technology to sing together safely, and didn’t miss a stride during the pandemic. This didn’t happen in Hallandale, and many people, including most of the children, no longer come to church. This is a huge loss for the kids, because a lot of what church is about is teaching kids about ethics, history, communications, and getting along with others.


Chunqiang Tang, PhD ’04

After leaving IBM Research in 2013, I joined Facebook, which has now changed its name to Meta Platforms. I have remained with the company since then and was promoted to the position of Senior Director. During the past few years, I have been working in the broad area of cloud computing in Meta’s massive private cloud. Although my work at Meta is primarily centered around production systems, managing millions of servers and serving billions of users, I have continued to publish our cutting-edge production work as research papers in esteemed conferences such as SOSP, OSDI, ISCA, and ASPLOS. Notably, we have received multiple accolades for our work, including the ISCA ’23 Best Paper Award for “Contiguitas: The Pursuit of Physical Memory Contiguity in Datacenters,” the ASPLOS ’22 Best Paper Award for “TMO: Transparent Memory Offloading in Datacenters,” and being selected for the IEEE Micro Top Picks 2023 with our ASPLOS ’22 paper titled “IOCost: Block IO Control for Containers in Datacenters.” Overall, I find great satisfaction in contributing to industry work that not only impacts billions of people but also advances the state of the art in research.


Mohammed J. Zaki, PhD ’98

Honored to be elected a Fellow of the American Association for the Advancement of Science “For distinguished contributions to the fields of data mining and machine learning, and for service to the academic community.”



Share your outcomes and updates with the department!


ugalumni@cs.rochester.edu, gradalumni@cs.rochester.edu


And connect with us in the URCS Alumni Group


https://www.linkedin.com/groups/12655649/

Join the Mailing List Using the Online Form

Alternate link for mailing list: https://forms.office.com/r/45sPAyLKxh

News Bulletins

Hackathons Pushing Students’ Creativity and Their Global Involvement

Sidhant Bendre, Sara Klinkbeil

Sidhant Bendre ’23 won one of the Grand Prizes at Stanford TreeHacks 2023, specifically “The Moonshot Prize,” awarded to “the craziest, most out-of-this-world project.” TreeHacks is one of the largest hackathons in the nation, attracting more than 1,700 hackers who fly in from all over the globe. In groups of four or fewer, they hack for 36 hours straight, all attempting to build the future and create the next big thing. TreeHacks is notoriously selective, with a 7.5% acceptance rate.

“The project my team and I built allows people to control a drone by just giving it an objective to accomplish in plain English! Using LLMs, I created a tool that writes its own drone programs to perform a variety of complex tasks, such as long-running and multimodal ones, without the user needing to write a line of code. For example, you could tell the drone to ‘find the bottle’ or ‘find the person in a red shirt’ and it will take off, survey the room for the target, and, once the target is found, fly to it.”

The Untapped Potential of Computing in Tackling Climate Change

“Hajim School researchers explore the potential for using computing to help promote eco-friendly lifestyles. Computer Science PhD student Adiba Proma and Associate Professor Ehsan Hoque authored an invited paper for NAE Perspectives along with Robert Wachter, the Holly Smith Distinguished Professor and Chair of the Department of Medicine at University of California, San Francisco.”

Read More

Patrick Chen ’25 Wins Best Student Paper Award at 2023 IEEE International Conference on Digital Health

“Patrick (Jingyuan) Chen, a rising CS junior working in Professor Jiebo Luo’s research group, has won the Best Student Paper Award after giving a presentation in person in Chicago last week at the IEEE International Conference on Digital Health (ICDH). He was the only undergraduate student in the invitation-only Student Research Competition. Yuan Yao, a first-year PhD student in Professor Luo’s research group, is the second author of the paper with collaborators from URMC (Maiken Nedergaard) and Copenhagen University in Denmark.”

Read More

Jiebo Luo Selected as a Fellow of the National Academy of Inventors

Read More

Alumnus Caleb Wohn ’22 Receives CSGrad4US Graduate Fellowship

“Caleb joined the ROC HCI lab as a sophomore in Fall 2019 and had been actively involved in multiple research projects including the SOPHIE Project, which is a virtual agent designed to prepare doctors for end-of-life conversations. He has also contributed to building a knowledge graph for climate change and a platform to nudge people to select eco-friendly products.”

Read More

Spring 2023 Data Set Grants

Steven Oufan Hai ’24 and Alexander Martin ’24 receive Data Set Grants from River Campus Libraries.

Read More

Qingjian Shi Wins People’s Choice Award in Annual Art of Science Competition

This year’s People’s Choice Award went to Computer Science student Qingjian Shi ’26 for “Robot’s Expression of Individuality.” Shi describes the work as a “retro-futuristic robot expressing itself and what it feels while contrasting mechanical and fluidity of nature in Monet style.”

Read More





