Talks by visitors to the Department

2017 talks

  • Information-Centric Networking in Wireless and Mobile Networks


    Speaker:

    Prof. Torsten Braun, Head of Communication and Distributed Systems, University of Bern

    Date:2017-12-15
    Time:10:00:00 (IST)
    Venue:SIT Committee Room, #113
    Abstract:

    In this talk, we discuss challenges and opportunities of information-centric networking (ICN) in wireless and mobile networks. First, we describe NetCod, an approach that uses network coding in multi-path network environments to support efficient and reliable video streaming. We then investigate the use of ICN in vehicular networks and present a solution to the Reverse Path Partitioning problem, which occurs when Data messages cannot be sent along the same path as the one established by Interest messages. We also discuss a virtualized architecture for cellular networks in which edge network caches can be proactively filled with data depending on users' mobility. This approach, however, achieves good performance only when users' mobility can be predicted accurately. We therefore investigated proactive caching mechanisms that are more robust to mobility-prediction mistakes, by caching in nodes somewhat closer to the core of the network. Finally, we discuss our recent work on mobility prediction in cellular networks that exploits a combination of various machine-learning mechanisms.


    Bio:

    Prof. Torsten Braun, Head of Communication and Distributed Systems, University of Bern
    http://www.cds.unibe.ch/about_us/team/prof_dr_braun_torsten



  • Estimating Matching Size in Graph Streams


    Speaker:

    Prof. Sanjeev Khanna (Univ. of Pennsylvania)

    Date:2017-12-15
    Time:15:00:00 (IST)
    Venue:Room # 425, Bharti Building
    Abstract:

    We study the problem of estimating the maximum matching size in sublinear space when the edges of a graph are revealed in a streaming manner. In particular, we consider the following question: Is estimating the maximum matching size a distinctly easier task than finding an approximate matching in the streaming model of computation? Previously known results do not provide any separation in the space requirements for these two tasks.

    We establish new upper and lower bounds for the tradeoff between the space requirement and the desired accuracy in estimating maximum matching size. Our results show that while matching size estimation is provably easier than finding an approximate matching, the space complexities of the two problems converge as the desired accuracy approaches near-optimality. A well-known connection between matching size estimation and computing the rank of Tutte matrices allows us to carry our lower bounds over to the problem of estimating the rank of a matrix in the streaming model, and we show that almost quadratic space is necessary to obtain a good approximation to matrix rank.

    Based on joint work with Sepehr Assadi and Yang Li.
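    For context on the streaming model (background, not the speakers' algorithm): a single greedy pass maintains a maximal matching, whose size is always within a factor of two of the maximum matching, using space proportional to the matching size. A minimal Python sketch:

```python
def greedy_matching_size(edge_stream):
    """One-pass greedy maximal matching over a stream of edges.

    Any maximal matching has at least half the size of a maximum
    matching, so the returned count is a 2-approximation.
    """
    matched = set()  # vertices already covered by the matching
    size = 0
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            size += 1
    return size

# On the path 1-2-3-4 the maximum matching has size 2; greedy returns
# 1 or 2 depending on the arrival order of the edges.
print(greedy_matching_size([(1, 2), (2, 3), (3, 4)]))  # → 2
print(greedy_matching_size([(2, 3), (1, 2), (3, 4)]))  # → 1
```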


  • Managing Systems Memory


    Speaker:

    Prof. K. Gopinath from IISc Bangalore

    Date:2017-11-22
    Time:11:30:00 (IST)
    Venue:SIT Committee Room, #113
    Abstract:

    Memory management has been an important problem since the dawn of the computing era. With larger memories and many specialized subsystems, managing memory is becoming more complex. In this talk, we give a perspective on the problem, including some results from recent research on memory-management interactions with RCU and huge pages.


  • Low-Latency Analytics on Colossal Data Streams with SummaryStore


    Speaker:

    Nitin Agrawal, Samsung Research

    Date:2017-11-20
    Time:15:30:00 (IST)
    Venue:SIT Committee Room, #113
    Abstract:

    In this talk I will present SummaryStore, an approximate time-series storage system designed for real-time analytics and machine learning at unprecedented cost savings. SummaryStore can store large volumes of data streams (~a petabyte on a single node) as highly compact summaries through a novel abstraction of time-decayed summaries; it provides accurate query responses, with tightly bounded error estimates, on data approximated by factors of 10-100x. This work was recently presented at SOSP '17.
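    The time-decay idea can be illustrated with a toy sketch (illustrative only; SummaryStore's actual windowing, error bounds and query machinery are far richer): keep recent values at fine granularity and merge older values into exponentially wider (count, sum) buckets, so storage grows logarithmically with stream length.

```python
class DecayedSummary:
    """Toy time-decayed summary: older values are merged into
    exponentially wider (count, sum) buckets, so a stream of n values
    is stored in O(log n) buckets at the cost of approximation."""

    def __init__(self):
        self.buckets = []  # oldest-first list of (count, sum) pairs

    def append(self, x):
        self.buckets.append((1, x))
        # Merge equal-sized neighbouring buckets (like a binary counter),
        # which coarsens older data first.
        i = len(self.buckets) - 1
        while i > 0 and self.buckets[i - 1][0] == self.buckets[i][0]:
            (c1, s1), (c2, s2) = self.buckets[i - 1], self.buckets[i]
            self.buckets[i - 1:i + 1] = [(c1 + c2, s1 + s2)]
            i -= 1

    def mean(self):
        total = sum(c for c, _ in self.buckets)
        return sum(s for _, s in self.buckets) / total

s = DecayedSummary()
for v in range(16):
    s.append(v)
print(len(s.buckets), s.mean())  # 16 values collapse into 1 bucket; mean 7.5
```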


    Bio:

    Nitin Agrawal is a systems researcher currently at Samsung Research. His interests lie broadly in distributed, mobile, and storage systems. He has received best paper awards at USENIX FAST 2009, 2011, 2012 and his work on storage for smartphones was featured in the MIT Technology Review, Slashdot, and other popular press. He is currently serving as the program committee chair for FAST 2018 and previously chaired HotStorage 2016. He obtained a PhD from Wisconsin in 2009.



  • Challenges and Opportunities in Wearable Systems


    Speaker:

    Prof. David F. Kotz, Champion International Professor, Department of Computer Science, Dartmouth College, Hanover, NH

    Date:2017-11-10
    Time:16:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    Wearable systems offer great promise in application domains as varied as healthcare, eldercare, augmented work, education, athletics, entertainment, parenting, travel, and personal productivity. In particular, I highlight two of our own wearable-mHealth projects. First, I introduce the Amulet, a wristband with the capability of running multiple mHealth apps with strong security and long battery life. Second, I introduce the Auracle, which aims to be an unobtrusive earpiece that can detect eating incidents in support of eating-behavior studies and interventions. I leverage these examples to outline some of the broader opportunities and challenges faced in developing wearable technology. In particular, we must address security and privacy challenges as we design and develop wearable devices, systems, and applications if we hope to see this technology widely accepted and adopted. Strong (and usable) security mechanisms are essential for safety-critical applications; meaningful privacy protections are essential for systems that accompany us through our private and public life. By raising these concerns today, I seek to ensure the wearable systems of tomorrow will have strong and usable security and privacy properties.


    Bio:

    David Kotz is the Champion International Professor in the Department of Computer Science at Dartmouth College. He served as Associate Dean of the Faculty for the Sciences for six years and as the Executive Director of the Institute for Security Technology Studies for four years, and served on the US Healthcare IT Policy Committee from 2013 to 2017. His research interests include security and privacy, pervasive computing for healthcare, and wireless networks. He has published over 130 refereed journal and conference papers and has obtained extensive grant funding. He is PI of a grant from the NSF Secure and Trustworthy Cyberspace program and leads a five-university team investigating Trustworthy Health & Wellness technology (see thaw.org). He is an IEEE Fellow, a Senior Member of the ACM, a 2008 Fulbright Fellow to India, and an elected member of Phi Beta Kappa.
    After receiving his A.B. in Computer Science and Physics from Dartmouth in 1986, he completed his Ph.D in Computer Science from Duke University in 1991 and returned to Dartmouth to join the faculty. For more information see http://www.cs.dartmouth.edu/~dfk/.




  • Make in India: Massively Scalable WiFi Networks


    Speaker:

    Dr. Pravin Bhagwat, CTO and Co-founder, Mojo Networks (www.mojonetworks.com)

    Date:2017-10-30
    Time:14:00:00 (IST)
    Venue:101, Bharti Building
    Abstract:

    Mojo Networks is revolutionizing WiFi through the power of the cloud and open standards, making WiFi networks remarkably easy to manage, reliable at massive scale, and incredibly cost-effective. The company started as a 'Make in India' experiment fourteen years ago and has now grown into a provider of cloud-managed WiFi for the global market. In partnership with Reliance Jio, we are building the world's largest public WiFi network: ~100,000 access points have been deployed throughout the country, and the network is growing rapidly, with a target of 1 million access points by next year. I'll present three key architecture innovations: (1) cloud management, (2) tri-radio WiFi access points, and (3) open programmable APIs, which offer groundbreaking advantages over the controller-based WLAN architecture prevalent in large enterprises and higher-education campuses today. I'll also talk about new developments in the Open Compute Project (OCP) that are pushing all technology-ecosystem players towards open standards, more interoperability, and the disaggregation of software and hardware.


    Bio:

    Pravin Bhagwat is an entrepreneur and a wireless security researcher. He is Co-founder and CTO of Mojo Networks -- a provider of highly scalable & secure cloud-managed Wi-Fi solutions. Mojo Networks has built a world class engineering team in Pune which is successfully launching 'first-of-its-kind' Cloud managed Wi-Fi products in the global market.

    Prior to starting his entrepreneurial career, Pravin was a lead researcher at AT&T Research and IBM Thomas J. Watson Research Center, New York where he was a member of MobileIP, WiFi and Bluetooth research teams. He also served as a visiting faculty member at computer science department, IIT Kanpur. Pravin also has 27 patents to his credit.

    For the past five years, Pravin has also been experimenting with a couple of non-tech ideas, such as leading a community level green/recycling initiative, jumpstarting an eco-restoration project near Pune and making a feature film length documentary. Pravin has B.Tech. in Computer Science from IIT Kanpur, India and MS/PhD in computer science from the University of Maryland, College Park, USA.



  • Wadhwani Institute for Artificial Intelligence: Independent Not-For-Profit Research Institute for Social Good


    Speaker:

    Dr. P. Anandan, CEO, Wadhwani Institute for Artificial Intelligence

    Date:2017-10-27
    Time:14:30:00 (IST)
    Venue:101, Bharti Building
    Abstract:

    WIAI is a new philanthropic effort by the Wadhwani brothers, Romesh Wadhwani and Sunil Wadhwani, entrepreneurs of Indian origin living in the US. Our mission is to do research in AI, ML, Data Science and related areas that will address societal challenges in a variety of domains including (but not limited to) Education, Health, Infrastructure, and Agriculture. We plan to build a strong team of researchers dedicated to this mission and to work with a variety of partners from academia, NGOs, government and international agencies, industry and others. We plan to take an approach that is strongly driven by societal use cases, keeping in mind the feasibility of deployment at scale as the ultimate measure of impact. The Institute will be headquartered in India and will begin its work by addressing opportunities for research on societal impact in India.


    Bio:

    CEO, Wadhwani Institute for Artificial Intelligence



  • SpanFS - Building a Distributed Filesystem with Strict Consistency.


    Speaker:

    Apurv Gupta

    Date:2017-10-26
    Time:16:15:00 (IST)
    Venue:404, Bharti building
    Abstract:

    This talk will cover the key challenges of marrying the strict consistency required by POSIX with building a scale-out filesystem that can span thousands of nodes. Traditionally, cloud-scale storage systems gave up some properties in order to achieve scalability (e.g., GFS and S3), and enterprise stacks (built on SMB/NFS) could not benefit from these new protocols. Cohesity's SpanFS solves these problems. We'll also touch upon newer filesystems such as BTRFS and the similarities they share with SpanFS.


    Bio:

    Apurv is the Chief Architect at Cohesity. His career spans Bell Labs, several startups, including GreenBorder (acquired by Google), and over 8 years at Google. He was also the Co-founder of ArkinNet (acquired by VMware). He is passionate about building highly scalable systems: query engines, data-warehousing systems and, most recently, filesystems. Apurv holds a B.Tech. from the Indian Institute of Technology Kanpur.

     



  • Pfaffian Orientations and Conformal Minors


    Speaker:

    Nishad Kothari, U. of Waterloo

    Date:2017-09-28
    Time:16:00:00 (IST)
    Venue:Room # 425, Bharti Building
    Abstract:

    Valiant (1979) showed that, unless P = NP, there is no polynomial-time algorithm to compute the number of perfect matchings of a given graph, even if the input graph is bipartite. Earlier, the physicist Kasteleyn (1963) introduced a special type of orientation of a graph, and referred to graphs that admit such an orientation as Pfaffian graphs. Kasteleyn showed that the number of perfect matchings is easy to compute if the input graph is Pfaffian, and he also proved that every planar graph is Pfaffian. The complete bipartite graph K3,3 is the smallest graph that is not Pfaffian. In general, the problem of deciding whether a given graph is Pfaffian is not known to be polynomial-time solvable. Special types of minors, known as conformal minors, play a key role in the theory of Pfaffian orientations. In particular, a graph is Pfaffian if and only if each of its conformal minors is Pfaffian. It was shown by Little (1975) that a bipartite graph G is Pfaffian if and only if G does not contain K3,3 as a conformal minor (in other words, if and only if G is K3,3-free); this places the problem of deciding whether a bipartite graph is Pfaffian in co-NP. Several years later, a structural characterization of K3,3-free bipartite graphs was obtained by McCuaig, Robertson, Seymour and Thomas (STOC 1997), and this led to a polynomial-time algorithm for deciding whether a given bipartite graph is Pfaffian. No such algorithm is known for general graphs. Norine and Thomas (2008) showed that, unlike the bipartite case, it is not possible to characterize all Pfaffian graphs by excluding a finite number of graphs as conformal minors.
    In light of everything that has been done so far, it would be interesting to even identify rich classes of Pfaffian graphs (that are nonplanar and nonbipartite). Inspired by a theorem of Lovasz (1983), we took on the task of characterizing graphs that do not contain K4 as a conformal minor, that is, K4-free graphs. In joint work with U. S. R. Murty (Journal of Graph Theory 2016), we provided a structural characterization of planar K4-free graphs. The problem of characterizing nonplanar K4-free graphs is much harder, and we have evidence to believe that it is related to the problem of recognizing Pfaffian graphs. In particular, we conjecture that every graph that is K4-free and K3,3-free is also Pfaffian.
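    Kasteleyn's theorem can be made concrete with a small sketch (illustrative, not part of the talk): given a Pfaffian orientation, the number of perfect matchings equals |Pf(A)| = sqrt(det(A)), where A is the skew-symmetric signed adjacency matrix. For the 4-cycle with a suitable orientation:

```python
def det(m):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# A Pfaffian orientation of the 4-cycle 0-1-2-3: edges 0->1, 1->2,
# 2->3, 0->3 (the inner face has an odd number of edges oriented
# clockwise, which is Kasteleyn's condition for planar graphs).
n = 4
A = [[0] * n for _ in range(n)]
for u, v in [(0, 1), (1, 2), (2, 3), (0, 3)]:
    A[u][v], A[v][u] = 1, -1  # skew-symmetric signing

matchings = round(det(A) ** 0.5)  # sqrt(det) = |Pf(A)|
print(matchings)  # → 2, and indeed C4 has exactly 2 perfect matchings
```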


    Bio:

    Nishad Kothari is a Postdoctoral Fellow at the Department of Combinatorics and Optimization, University of Waterloo. His research is in structural and algorithmic graph theory with a focus on Matching Theory and Pfaffian Orientations. Nishad completed his PhD at Waterloo under the supervision of Joseph Cheriyan and U. S. R. Murty, and before that he completed a Masters degree in Computer Science at Georgia Tech. October 2017 onwards, Nishad will be a Postdoctoral Fellow at the Faculty of Mathematics, University of Vienna, where he plans to continue his work in Pfaffian orientations with Ilse Fischer.



  • On Demystifying CNF-XOR Formulas


    Speaker:

    Dr. Kuldeep Meel, Assistant Professor, National University of Singapore

    Date:2017-09-13
    Time:12:00:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    The Boolean Satisfiability Problem (SAT) is fundamental in many diverse areas such as artificial intelligence, formal verification, and biology. A large body of work explains the runtime performance of modern SAT solvers through analysis of random k-CNF formulas. Recent universal-hashing based approaches to the problems of sampling and counting crucially depend on the runtime performance of specialized SAT solvers on formulas expressed as the conjunction of both k-CNF constraints and variable-width XOR constraints (known as CNF-XOR formulas), but random CNF-XOR formulas are unexplored in prior work.
    In this work, we present the first study of random CNF-XOR formulas. We show empirical evidence of a surprising phase-transition in the satisfiability of random CNF-XOR formulas that follows a linear trade-off between k-CNF and XOR constraints. Moreover, we prove that a phase-transition in the satisfiability of random CNF-XOR formulas exists for k=2 and (when the number of k-CNF constraints is small) for k>2. We empirically demonstrate that a state-of-the-art SAT solver scales exponentially on random CNF-XOR formulas across a wide range of XOR-clause densities, peaking around the empirical phase-transition location. Finally, we prove that the solution space of random CNF-XOR formulas 'shatters' at all nonzero XOR-clause densities into well-separated components.
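    A toy version of the experimental setup (my illustration; the study itself uses real SAT solvers and much larger instances) generates random k-CNF clauses plus variable-width XOR constraints and checks satisfiability by brute force:

```python
import itertools
import random

def random_cnf_xor(n, m_cnf, m_xor, k=3, rng=random):
    """Random k-CNF clauses plus variable-width XOR constraints
    over variables 1..n (signed integers are CNF literals)."""
    cnf = [[rng.choice((1, -1)) * v
            for v in rng.sample(range(1, n + 1), k)]
           for _ in range(m_cnf)]
    xors = [(rng.sample(range(1, n + 1), rng.randint(1, n)),
             rng.randint(0, 1))
            for _ in range(m_xor)]
    return cnf, xors

def satisfiable(n, cnf, xors):
    """Brute-force check over all 2^n assignments (tiny n only)."""
    for bits in itertools.product((0, 1), repeat=n):
        ok_cnf = all(any((lit > 0) == bool(bits[abs(lit) - 1])
                         for lit in clause)
                     for clause in cnf)
        ok_xor = all(sum(bits[v - 1] for v in vs) % 2 == parity
                     for vs, parity in xors)
        if ok_cnf and ok_xor:
            return True
    return False

rng = random.Random(0)
cnf, xors = random_cnf_xor(n=12, m_cnf=20, m_xor=4, rng=rng)
print(satisfiable(12, cnf, xors))
```

Sweeping m_cnf and m_xor while recording the fraction of satisfiable instances reproduces, in miniature, the kind of phase-transition plot the abstract describes.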


    Bio:

    Kuldeep Meel is starting as Assistant Professor at National University of Singapore. His research broadly lies at the intersection of artificial intelligence and formal methods. He is the recipient of a 2016-17 IBM PhD Fellowship, the 2016-17 Lodieska Stockbridge Vaughn Fellowship and the 2014 Vienna Center of Logic and Algorithms International Outstanding Masters thesis award.



  • Microsoft Research Fellows Program


    Speaker:

    Dr. Kalika Bali , Microsoft Research Bangalore.

    Date:2017-09-04
    Time:16:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    The Research Fellows program at Microsoft Research India (MSR India) exposes bright, young minds in India to world-class research and the state-of-the-art in technology. The program prepares students for careers in research, engineering, as well as entrepreneurship. MSR India has some of the best researchers pushing the frontiers of computer science and technology. Research Fellows have contributed to all aspects of the research lifecycle, spanning ideation, implementation, evaluation, and impact.

    Please note: For the next cycle of RF applications, candidates should have completed a BS/BE/BTech or MS/ME/MTech in Computer Science or related areas, graduating by summer 2018.


    Bio:

    Kalika Bali is a Researcher at Microsoft Research Lab India. She has worked for nearly two decades in the area of speech and language technology, especially for resource-poor languages. The primary focus of her current research is on how natural-language systems can make human-machine interaction more socio-culturally relevant and appropriate.



  • New Techniques for Power-Efficient CPU/CPU-GPU Processors


    Speaker:

    Dr. Kapil Dev, NVIDIA, Santa Clara, USA

    Date:2017-08-31
    Time:15:45:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    After the end of Dennard scaling, multi-core CPU and CPU-GPU processors have emerged as the primary computing devices for satisfying the high performance demands of modern applications. The performance of these devices is typically limited by the power density and thermal design power (TDP) of the chip. Specifically, power efficiency (performance/Watt) is the key factor in improving the performance of modern computing devices used in mobile, desktop and server computers.

    In this talk, I will highlight a couple of new techniques for building power-efficient CPU-GPU processors. First, I will present a framework for fine-grained power mapping and modeling of multi-core processors. The framework uses a high-end infrared camera to capture thermal images of the processor and generate accurate power models, which can potentially be used for better run-time power management of the processor. In the second half of my talk, I will present a machine-learning-based technique to map workloads onto the appropriate device of CPU-GPU processors under different static and time-varying workload/system conditions. Overall, the proposed techniques could be used to implement an efficient power-management unit for CPU/CPU-GPU processors.


    Bio:

    Dr. Kapil Dev is a Senior Engineer at NVIDIA, Santa Clara. He received his PhD from Brown University, Providence, Rhode Island in 2016. He has published 10 papers in peer-reviewed international conferences and journals and has co-authored 4 US patents. He is an active reviewer for multiple journals, including the IEEE Transactions on Very Large Scale Integration Systems and Elsevier's Integration, the VLSI Journal, and has served on the program committees of conferences such as DAC, HPC, and ICCAD. His research interests are in building energy- and power-efficient computing systems. He is particularly interested in applying machine-learning techniques to improve design and run-time power methodologies for CPU-GPU processors.



  • Internet of Things: Information Analytics, Energy-Efficiency and Hardware Security


    Speaker:

    Prof. Keshab K. Parhi (http://www.ece.umn.edu/~parhi)

    Date:2017-08-25
    Time:11:00:00 (IST)
    Venue:SIT Building, Seminar room
    Abstract:

    This talk will address VLSI architectures for the emerging internet of things. Machine learning and information analytics are important components of all these things, and reducing energy consumption and silicon area is critical, as are enhancing security and preventing piracy. Almost all things should have embedded classifiers to make decisions on data; thus, reducing the energy consumption of features and classifiers is important. The first part of the talk will present energy-reduction approaches for a medical internet-of-things device for monitoring EEG and predicting seizures. In the second part of the talk, I will address hardware security and present approaches to designing circuits that cannot easily be reverse-engineered or pirated. To this end, authentication and obfuscation approaches will be presented.


    Bio:

    Keshab K. Parhi received the B.Tech. degree from the Indian Institute of Technology (IIT), Kharagpur, in 1982, the M.S.E.E. degree from the University of Pennsylvania, Philadelphia, in 1984, and the Ph.D. degree from the University of California, Berkeley, in 1988. He has been with the University of Minnesota, Minneapolis, since 1988, where he is currently Distinguished McKnight University Professor and Edgar F. Johnson Professor in the Department of Electrical and Computer Engineering. He has published over 600 papers, is the inventor of 29 patents, and has authored the textbook VLSI Digital Signal Processing Systems (Wiley, 1999) and coedited the reference book Digital Signal Processing for Multimedia Systems (Marcel Dekker, 1999). Dr. Parhi is widely recognized for his work on high-level transformations of iterative data-flow computations, for developing a formal theory of computing for design of digital signal processing systems, and for his contributions to multi-gigabit Ethernet systems on copper and fiber and for backplanes. His current research addresses VLSI architecture design of signal processing, communications and biomedical systems, error control coders and cryptography architectures, high-speed transceivers, stochastic computing, hardware security, and molecular computing.  He is also currently working on intelligent classification of biomedical signals and images, for applications such as seizure prediction and detection, schizophrenia classification, biomarkers for mental disorders, brain connectivity, and diabetic retinopathy screening.  Dr. Parhi is the recipient of numerous awards including the 2017 Mac Van Valkenburg award and the 2012 Charles A. Desoer Technical Achievement award from the IEEE Circuits and Systems Society, the 2004 F. E. Terman award from the American Society of Engineering Education, the 2003 IEEE Kiyo Tomiyasu Technical Field Award, the 2001 IEEE W. R. G. 
Baker prize paper award, and a Golden Jubilee medal from the IEEE Circuits and Systems Society in 2000. He was elected a Fellow of IEEE in 1996. He served as the Editor-in-Chief of the IEEE Trans. Circuits and Systems, Part I during 2004-2005 and as an elected member of the Board of Governors of the IEEE Circuits and Systems society from 2005 to 2007.



  • The Era of Accelerators


    Speaker:

    Viktor K. Prasanna, University of Southern California

    Date:2017-08-02
    Time:15:00:00 (IST)
    Venue:SIT Building, Seminar room
    Abstract:

    FPGAs have matured over the years and are being used along with multi-core and emerging memory technologies to realize advanced platforms to accelerate a variety of applications. This talk will review the promise of reconfigurable computing leading up to current trends in accelerators. We will illustrate FPGA-based parallel architectures and algorithms for a variety of data analytics kernels in advanced networking, streaming graph processing and machine learning. While demonstrating algorithm-architecture co-design methodology to realize high performance accelerators for deep packet inspection, regular expression matching, packet classification, traffic classification, heavy hitter detection, etc., we demonstrate the role of modeling and algorithmic optimizations to develop highly efficient IP cores. We also show high throughput and energy efficient accelerator designs for a class of graph analytics and machine learning kernels. Our approach is based on high level abstractions of the FPGA platforms and design of efficient data structures, algorithms and mapping methodologies. We illustrate the performance improvements such accelerators offer and demonstrate the suitability of FPGAs for these computations. We conclude by identifying opportunities and challenges in exploiting emerging heterogeneous architectures composed of multi-core processors, FPGAs, GPUs and coherent memory.


    Bio:

    Viktor K. Prasanna (ceng.usc.edu/~prasanna) is Charles Lee Powell Chair in Engineering in the Ming Hsieh Department of Electrical Engineering and Professor of Computer Science at the University of Southern California. He is the director of the Center for Energy Informatics at USC and leads the FPGA (fpga.usc.edu) and Data Science Labs (dslab.usc.edu). His research interests include parallel and distributed systems including networked sensor systems, embedded systems, configurable architectures and high performance computing. He served as the Editor-in-Chief of the IEEE Transactions on Computers during 2003-06 and is currently the Editor-in-Chief of the Journal of Parallel and Distributed Computing. Prasanna was the founding Chair of the IEEE Computer Society Technical Committee on Parallel Processing. He is the Steering Co-chair of the IEEE International Parallel and Distributed Processing Symposium (www.ipdps.org) and the Steering Chair of the IEEE International Conference on High Performance Computing (www.hipc.org). He is a Fellow of the IEEE, the ACM and the American Association for Advancement of Science (AAAS). He is a recipient of 2009 Outstanding Engineering Alumnus Award from the Pennsylvania State University. He received the 2015 W. Wallace McDowell award from the IEEE Computer Society for his contributions to reconfigurable computing.



  • Novel Markets on the Internet: Models and Algorithms


    Speaker:

    Prof. Vijay Vazirani

    Date:2017-07-26
    Time:11:30:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    The Internet has revolutionized markets; perhaps the most impressive of these in terms of size, impact and novelty of mechanisms is Google's Adwords market. Google's initial mechanism for this market did not effectively serve small advertisers, who form the fat tail of the budget distribution, thereby losing a significant fraction of potential revenue. Building on the theory of online matching algorithms, we gave an optimal algorithm for the Adwords market and a formal framework that has proved useful in thinking about budgeted auctions more generally. These ideas have been widely adopted by Google and other search engine companies.

    The rapidly growing cloud-computing market, which is projected to eclipse even the Adwords market, provides another opportunity for novel algorithms and practical impact. I will describe a first attempt in this direction: an equilibrium-based market model for pricing and allocating resources, and a polynomial-time algorithm for computing them without relying on the proverbial 'invisible hand of the market'.
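    For flavour, here is a sketch of the balance-style rule from the online-matching literature that the abstract builds on (my illustration, not Google's production mechanism): each arriving query goes to the advertiser maximizing bid x psi(f), where f is the fraction of budget already spent and psi(f) = 1 - e^(f-1), the scaling that achieves the optimal 1 - 1/e competitive ratio.

```python
import math

def adwords_balance(budgets, query_stream):
    """Online budgeted allocation with a balance-style rule: assign
    each query to the advertiser maximizing bid * (1 - e**(f - 1)),
    where f is the fraction of that advertiser's budget already spent."""
    spent = {a: 0.0 for a in budgets}
    revenue = 0.0
    for bids in query_stream:  # bids: {advertiser: bid} for one query
        best, best_score = None, 0.0
        for a, bid in bids.items():
            remaining = budgets[a] - spent[a]
            if remaining <= 0:
                continue  # budget exhausted
            f = spent[a] / budgets[a]
            score = min(bid, remaining) * (1 - math.exp(f - 1))
            if score > best_score:
                best, best_score = a, score
        if best is not None:
            charge = min(bids[best], budgets[best] - spent[best])
            spent[best] += charge
            revenue += charge
    return revenue

# Two advertisers with budget 2 each, four identical queries: the rule
# alternates between them and extracts the full revenue of 4.0.
print(adwords_balance({"A": 2.0, "B": 2.0}, [{"A": 1.0, "B": 1.0}] * 4))
```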


    Bio:

    Vijay Vazirani got his Bachelor's degree from MIT in 1979 and his Ph.D. from U.C. Berkeley in 1983, and is currently Professor of Computer Science at Georgia Tech. His research career has centered on the design of algorithms, together with work on complexity theory, cryptography, coding theory, and game theory. He is best known for his work on efficient algorithms for the classical maximum matching problem (1980s), fundamental complexity-theoretic results obtained using randomization (1980s), approximation algorithms for basic NP-hard optimization problems (1990s), and efficient algorithms for computing market equilibria (current). In 2001 he published what is widely viewed as the definitive book on approximation algorithms; this book has been translated into Japanese, French and Polish. In 2005 he initiated work on a comprehensive volume on algorithmic game theory; the co-edited volume appeared in 2007.



  • Faster test execution by improving cache locality


    Speaker:

    Ajitha Rajan, University of Edinburgh

    Date:2017-07-25
    Time:12:00:00 (IST)
    Venue:SIT Building, # 001
    Abstract:

    With the growing complexity of software, the number of test cases needed for effective validation is extremely large. Executing these large test suites is expensive, both in time and in energy. Cache misses are known to be one of the main factors contributing to the execution time of software, and they are reduced by increasing the locality of memory references. In this talk, I will discuss a novel approach to improving instruction locality across test-case runs. Our approach measures the distance between test-case runs (the number of differing instructions) and then permutes the test cases for execution so that the distance between neighbouring test cases is minimised. We hypothesise that executing test cases in this new order, with improved instruction locality, will reduce execution time.
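    The ordering step can be sketched as a greedy nearest-neighbour heuristic (a simplification; the talk's distance measure and tooling operate on real execution profiles). Here each test is represented by the set of instructions it executes, and the next test scheduled is always the one differing least from the previous:

```python
def order_tests(coverage):
    """Greedy nearest-neighbour ordering of test cases.

    coverage maps test name -> set of instructions (or basic blocks)
    the test executes.  Starting from an arbitrary test, repeatedly
    schedule the remaining test with the smallest symmetric difference
    from the previously scheduled one, to improve i-cache locality.
    """
    remaining = dict(coverage)
    name, insns = remaining.popitem()
    order = [name]
    while remaining:
        name = min(remaining, key=lambda t: len(remaining[t] ^ insns))
        insns = remaining.pop(name)
        order.append(name)
    return order

tests = {"t1": {1, 2, 3}, "t2": {7, 8}, "t3": {1, 2, 4}, "t4": {7, 9}}
print(order_tests(tests))  # tests with similar instruction sets end up adjacent
```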


    Bio:

    Ajitha Rajan is an assistant professor at the University of Edinburgh. Prior to that she was a postdoc at the University of Oxford and the Laboratoire d'Informatique de Grenoble. She graduated with a PhD from the University of Minnesota. Her research is in software engineering, particularly software testing. She has served on the program committees of major international software engineering conferences such as ASE, ICST and FMCAD.

     



  • Geometric Optimization Problems


    Speaker:

    Dr. Saurabh Ray

    Date:2017-07-21
    Time:12:00:00 (IST)
    Venue:SIT Building, #006
    Abstract:

    We consider classical combinatorial optimization problems (e.g. independent set, set cover, dominating set etc) in a geometric setting where the set system is induced by geometric objects. Even though these problems remain NP-hard, much better approximation algorithms are often possible due to the special structure in geometrically induced problems.

    In this talk, we give an overview of broad techniques and results in this area, including some of our recent results. We also present a number of open problems.
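    A one-dimensional illustration of the theme (mine, not from the talk): maximum independent set is NP-hard on general graphs, but for intervals on a line the earliest-right-endpoint greedy solves it exactly:

```python
def max_independent_intervals(intervals):
    """Exact maximum independent set for intervals (lo, hi) on a line:
    sort by right endpoint and greedily take each interval that does
    not overlap the last one chosen."""
    chosen, last_end = [], float("-inf")
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo > last_end:
            chosen.append((lo, hi))
            last_end = hi
    return chosen

print(len(max_independent_intervals([(0, 3), (2, 5), (4, 7), (6, 9)])))  # → 2
```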


    Bio:

    Dr. Saurabh Ray is on the Computer Science faculty at NYU Abu Dhabi.
    He completed his undergraduate studies in Computer Science and Engineering at the Indian Institute of Technology Guwahati, received a Ph.D. in Computer Science from Universität des Saarlandes in Germany in 2009, and has since done postdoctoral research at the Max Planck Institute for Informatics in Germany, École Polytechnique Fédérale de Lausanne in Switzerland, and Ben-Gurion University in Israel. As a postdoc, Saurabh has spent time in both Computer Science and Mathematics departments. His research interests are in discrete and computational geometry, extremal combinatorics, and approximation algorithms. He is particularly excited about novel applications of mathematics in Computer Science.



  • Is Interaction Necessary for Distributed Private Learning?


    Speaker:

    Jalaj Upadhyay, Penn State U.

    Date:2017-06-19
    Time:10:00:00 (IST)
    Venue:Room # 425, Bharti Building
    Abstract:

    Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual's device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users' hands.

    For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol.

    We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction.

    We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.

    Based on a joint work with Adam Smith and Abhradeep Thakurta.


  • Energy Efficiency and Reliability in Many-Core Embedded Systems


    Speaker:

    Amit Kumar Singh, Univ. of Southampton

    Date:2017-04-20
    Time:11:30:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Modern embedded systems, e.g., smartphones, PDAs and tablet PCs, often need to support multiple embedded software applications, and this number is increasing faster than ever. In order to satisfy the high performance requirements of various applications, reliance on multi-/many-core based systems is increasing. These systems must be optimized for the required performance metrics in order to fulfill end-user demands. Energy consumption and reliability are two important metrics, as their optimization leads to increased battery life and a better user experience, respectively. To fulfill these requirements, efficient run-time mapping methodologies are needed that can map and execute applications efficiently on the many-core system resources.

    This talk will provide an overview of the run-time resource management activities carried out within the PRiME project to optimize energy consumption and reliability. It will additionally give more details about some recently proposed run-time management methodologies, highlighting the potential bottlenecks of existing ones. These methodologies exploit application domain knowledge to perform compute-intensive design space exploration at design time, so that only light-weight computations need to be performed at run time. The results obtained by the proposed and relevant existing methodologies will be presented to show the advantages of the former.


    Bio:

    Dr. Amit Kumar Singh received the B.Tech. degree in Electronics Engineering from the Indian Institute of Technology (Indian School of Mines), Dhanbad, India, in 2006, and the Ph.D. degree from the School of Computer Engineering, Nanyang Technological University (NTU), Singapore, in 2013. He was with HCL Technologies, India, for a year and a half before starting his PhD at NTU in 2008. He worked as a post-doctoral researcher at the National University of Singapore (NUS) from 2012 to 2014 and at the University of York, UK, from 2014 to 2016. Currently, he is a senior research fellow at the University of Southampton, UK. His current research interests include system-level design-time and run-time optimization of 2D and 3D multi-core systems, with a focus on performance, energy, temperature, and reliability. Dr. Singh was the recipient of the ISORC 2016 Best Paper Award, the PDP 2015 Best Paper Award, a HiPEAC Paper Award, and was a GLSVLSI 2014 Best Paper Candidate. He has served on the TPCs of IEEE/ACM conferences such as ISED, MES, NoCArc and ESTIMedia.



  • "Data-Driven Techniques Towards Performance Optimization in Wireless Networks"


    Speaker:

    Ayon Chakraborty, SUNY Stonybrook

    Date:2017-03-22
    Time:12:00:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    In this talk, I will introduce two emerging issues at the two extreme ends of the wireless network protocol stack. The first deals with managing quality-of-experience (QoE) for mobile applications, and the second with large-scale monitoring of the radio frequency spectrum. For both directions, we show how data-driven techniques can be leveraged to model the performance of complicated real-world systems and to apply the necessary optimization.
    Guaranteeing good QoE for the user is a challenge in mobile apps due to their diverse resource requirements and the resource-constrained, variable nature of wireless networks and mobile devices. A key question we ask is: how to select the best network for a given application, or how to optimize the overall user experience for multiple users and applications in a given network? To address such issues, we develop a network-side system called ExBox (Experience Middlebox) that uses a new experiential capacity-based formulation to aid admission control and network selection. ExBox can handle multiple different types of applications in the network and dynamic network conditions.
    The second issue relates to the problem of RF spectrum limitations in the face of exponentially growing demand for wireless network capacity. This has prompted new spectrum sharing regimes in which a diverse set of wireless technologies, including broadcast television, radar and various forms of wireless communications, co-exist in the same spectrum band while respecting specific spectrum rights. Our key question is: how effectively can we monitor available spectrum opportunities across time and space? One key technology in this space is the spectrum database, which stores spectrum availability information. We augment existing spectrum database technologies to improve their accuracy in a cost-effective fashion, by supplementing existing model-based techniques with actual spectrum measurements. We also make the spectrum measurements themselves scalable by developing techniques to perform spectrum sensing on mobile devices. These efforts culminate in SpecSense, a spectrum database system that can schedule and collect measurements from a distributed system of spectrum sensors in order to estimate spatio-temporal patterns in spectrum availability.


    Bio:

    Ayon Chakraborty is a final-year Ph.D. candidate at the State University of New York at Stony Brook, working with Prof. Samir Das. He is broadly interested in mobile systems and in data-driven performance analysis and optimization of such systems operating in constrained environments. Papers resulting from his research have appeared in top venues including ACM CoNEXT, IMC, MobiCom and IEEE INFOCOM. His work on characterizing quality-of-experience (QoE) among mobile virtual network operators (MVNOs) was nominated for the Best Paper Award at ACM IMC 2014. Before joining Stony Brook, Ayon completed his B.E. at Jadavpur University (Calcutta) in 2011, where he won the Department Gold Medal. He is also a recipient of the DAAD scholarship.



  • Fusing Physical and Social Sensors for Situation Awareness


    Speaker:

    Mohan Kankanhalli, School of Computing, National University of Singapore

    Date:2017-03-09
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    With the spread of physical sensors and social sensors, we are living in a world of big sensor data. Though they generate heterogeneous data, they often provide complementary information. Combining these two types of sensors together enables sensor-enhanced social media analysis which can lead to a better understanding of dynamically occurring situations.

    We utilize event-related information detected from physical sensors to filter and then mine geo-located social media data, to obtain high-level semantic information. Specifically, we apply a suite of visual concept detectors to video cameras to generate "camera tweets" and develop a novel multi-layer tweeting-cameras framework. We fuse "camera tweets" and social media tweets via a unified matrix factorization model: we perform matrix factorization on a spatio-temporal situation matrix formed from physical sensor signals, incorporating the surrounding social content to extract a set of latent topics that can explain the concept signal strengths. We have tested our method on large-scale real data, including PSI station data, traffic CCTV camera images and tweets, for situation prediction as well as for filtering noise from events of diverse situations. The experimental results show that the proposed approach is effective.


    Bio:

    Mohan Kankanhalli is Provost's Chair Professor of Computer Science at the National University of Singapore (NUS). He is also the Dean of NUS School of Computing. Before becoming the Dean in July 2016, he was the NUS Vice Provost (Graduate Education) during 2014-2016 and Associate Provost during 2011-2013. Mohan obtained his BTech from IIT Kharagpur and MS & PhD from the Rensselaer Polytechnic Institute.

    His current research interests are in Multimedia Computing, Information Security, Image/Video Processing and Social Media Analysis. He directs the SeSaMe (Sensor-enhanced Social Media) Centre which does fundamental exploration of social cyber-physical systems which has applications in social sensing, sensor analytics and smart systems. He is on the editorial boards of several journals including the ACM Transactions on Multimedia, Springer Multimedia Systems Journal, Pattern Recognition Journal and Springer Multimedia Tools & Applications Journal. He is a Fellow of IEEE.



  • Urban sustainability, preserving privacy


    Speaker:

    Rijurekha Sen, MPI-SWS 

    Date:2017-03-06
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Developing regions are urbanizing rapidly, with tremendous growth in the urban population attracted by the economic prospects of cities. The growth and management of urban infrastructure has to match this population growth to make our cities efficient, sustainable and more livable. In this talk, I will give a brief overview of different projects where I have used Computer Science methodologies for applications in urban sustainability. I will describe in detail one project in which we monitored traffic queues at low latency for automated intersection management, even though the traffic was non-laned and disorderly, as is common in developing countries.

    The second part of my talk will focus on another system, for privacy-preserving sensing, in which image capture devices automatically sense the privacy preferences of people in their vicinity and edit photographs to match these preferences. This aspect of security and privacy is important to consider, as gathering information ubiquitously for better urban management exposes citizens to the risk of privacy violations.


    Bio:

    Rijurekha Sen is a Humboldt post-doctoral researcher at the Max Planck Institute for Software Systems. She works in the area of cyber-physical systems and uses interdisciplinary Computer Science methods for applications at the intersection of information technology and society. She completed her PhD at IIT Bombay and has interned at industrial labs and startups such as Microsoft Research India and SMU LiveLabs. Her dissertation on automated road traffic monitoring in developing regions was awarded the ACM India Doctoral Dissertation Award, 2014.



  • Robust Image Feature Description, Matching and Applications


    Speaker:

    Shiv Ram Dubey, IIIT Chitoor

    Date:2017-03-03
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Most applications of computer vision, such as image correspondence, image retrieval, object recognition, texture recognition, face and facial expression recognition, 3D reconstruction, etc., require the matching of two images. In order to match two images, i.e., to find the similarity or dissimilarity between them, some description of the image is required: matching the raw intensity values of two images is time consuming and is affected by small variations in inherent properties such as brightness, orientation and scale. Thus, images can instead be matched using a description derived from basic properties of the image such as color, texture and shape. This description is called the feature descriptor, or signature, of the image. The main objectives of any descriptor are (1) to capture the discriminative information of the image, (2) to provide invariance to geometric and photometric changes, and (3) to reduce the dimension of the features to be matched.
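    The general idea of descriptor-based matching can be sketched as follows; this is a generic illustration (nearest neighbour under Euclidean distance on fixed-length feature vectors), not any particular descriptor from the talk.

    ```python
    import math

    def l2_distance(d1, d2):
        """Euclidean distance between two fixed-length feature descriptors."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

    def best_match(query, database):
        """Return the key of the database image whose descriptor is closest
        to the query descriptor."""
        return min(database, key=lambda k: l2_distance(query, database[k]))

    # Hypothetical 3-bin descriptors (e.g., normalized color histograms).
    db = {"img1": [0.9, 0.1, 0.0], "img2": [0.2, 0.5, 0.3]}
    print(best_match([0.8, 0.2, 0.0], db))  # img1
    ```

    Matching compact descriptors rather than raw pixels is what makes retrieval over large databases tractable.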

    We have proposed an interleaved intensity-order-based local descriptor (IOLD) for region-based image matching under various geometric and photometric transformations. We have also proposed four grayscale local image descriptors, namely the local diagonal extrema pattern (LDEP), local bit-plane decoded pattern (LBDP), local bit-plane dissimilarity pattern (LBDISP), and local wavelet pattern (LWP), for biomedical image retrieval in MRI and CT databases. We have reported four color-based local descriptors, namely the local color occurrence descriptor (LCOD), rotation- and scale-robust hybrid descriptor (RSHD), multichannel adder-based local binary pattern (maLBP), and multichannel decoder-based local binary pattern (mdLBP), for natural and texture image retrieval. An illumination compensation mechanism has also been introduced as a preferred pre-processing step for brightness-invariant image retrieval.


  • Adaptivity and Optimization for Physics-Based Animation


    Speaker:

    Rahul Narain, U. of Minnesota

    Date:2017-03-02
    Time:10:00:00 (IST)
    Venue:301, Bharti building
    Abstract:

    Many visually compelling natural phenomena, from the wrinkling and creasing of thin sheets to the collision-free flow of pedestrians in large crowds, are challenging to model computationally because they involve nonlinear interactions across a large number of degrees of freedom. For many applications that must be responsive to user input, such as computer animation, virtual environments, and computer-aided design, it is essential to obtain a sufficiently accurate solution under a very limited computational budget.

    In this talk, I will present my work on efficient computational methods for solving such simulation problems. First, I will describe an adaptive remeshing technique for simulating thin elastic sheets such as cloth. Through remeshing, one can dynamically adapt the resolution of the simulation mesh as the simulation proceeds, focusing computational effort on regions containing emerging detail such as wrinkles, creases, and fractures. Second, I will discuss optimization-based simulation techniques, in which the equations of motion are reformulated in terms of energy minimization. This approach enables robust and parallel time integration of complex nonlinear systems such as pedestrian crowds and nonlinear elastic materials. These techniques help make it possible to simulate large complex systems to high fidelity with a limited computational cost. 


    Bio:

    Rahul Narain is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities. His research interests lie in numerical methods for computer graphics and animation, particularly efficient numerical optimization techniques for large-scale problems in physics-based animation and computational displays. He received his B.Tech. from IIT Delhi and his M.S. and Ph.D. from the University of North Carolina at Chapel Hill, advised by Prof. Ming Lin. He was a postdoc at the University of California, Berkeley, working with Prof. James O'Brien.



  • PinDrop: Networking Substrate for High-Quality Real-Time Streaming


    Speaker:

    Venkat Padmanabhan, from Microsoft Research

    Date:2017-02-21
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Applications such as Internet telephony (VoIP) and online gaming place stringent demands on network performance in terms of latency, jitter, and packet loss. The best-effort Internet often falls short of meeting these requirements. In the PinDrop project at Microsoft Research, we are developing the networking substrate for supporting high-quality real-time streaming. Our approach centers on three pillars: (1) leveraging data-driven insights, (2) exploiting network path redundancy, and (3) performing wireless-specific optimizations. After touching briefly on these, I will focus on Kwikr, which uses WiFi hints to improve bandwidth estimation for Skype calls. Kwikr uses a simple technique called ping-pair to estimate and attribute the queuing delay at the WiFi downlink, which enables more informed bandwidth adaptation than the state of the art.


    Bio:

    Venkat Padmanabhan is a Principal Researcher at Microsoft Research India, where he founded the Mobility, Networks, and Systems group in 2007. He was previously with Microsoft Research Redmond, USA for nearly 9 years, and holds a B.Tech. from IIT Delhi and an M.S. and a Ph.D. from UC Berkeley, all in Computer Science. Venkat’s research interests are broadly in networked and mobile systems. He recently received the Shanti Swarup Bhatnagar Prize and the inaugural ACM SIGMOBILE Test-of-Time paper award. He has been elected a Fellow of the Indian National Academy of Engineering (INAE), the IEEE, and the ACM. He presently serves on the SIGCOMM Industrial Liaison Board. He can be reached online at http://research.microsoft.com/~padmanab/.



  • Multi-biometric Authentication Systems


    Speaker:

    Puneet Gupta, TCS Kolkata

    Date:2017-02-20
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Authentication systems are an urgent need of the times in several critical domains such as banking, forensics and airport security. Traditional key- or password-based systems can be easily spoofed, lost or stolen; hence they are being replaced by biometric systems, which use user characteristics for accurate and automatic authentication. Unfortunately, no single trait provides all the desired characteristics, owing to sensor noise, user factors and environmental conditions. Performance can be improved by fusing multiple traits; such a fused system is referred to as a multi-modal system.

    In this talk, several unimodal systems will be discussed, based on: (i) slap images, which contain all the fingerprints of a hand; (ii) finger veins; (iii) palm dorsal veins (dorsal meaning the back side of the hand); and (iv) infrared hand geometry. Finally, the fusion of these systems will be analyzed in terms of performance improvement and the cost involved.


    Bio:

    Puneet Gupta is presently a scientist at the TCS Innovation Lab, Kolkata. He earned his PhD in Computer Science and Engineering in 2016 from IIT Kanpur, advised by Prof. Phalguni Gupta. His research interests lie in image processing and computer vision, in particular biometrics. During his Ph.D., he designed several multi-biometric systems for accurate human authentication.



  • On Programming Multiprocessors and Reviving Innovation in Hardware Design


    Speaker:

    Dr. Gagan Gupta, Microsoft Research

    Date:2017-02-17
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In this talk I will address two topics. The first examines challenges posed by parallel programming. Conventional wisdom says that we must abandon order in programs to secure performance on multiprocessors. Unfortunately, the resulting nondeterminism complicates programming and, more generally, the use of multiprocessors.

    In contrast, microprocessors execute instructions in sequential programs concurrently while providing an ordered view of the execution. This "ordered" paradigm kept system use simple while improving performance, contributing to the phenomenal success of microprocessors. I explore whether an analogous approach can be applied to multiprocessors, and what its pros and cons might be.
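    By way of analogy only (this sketch is not the speaker's system), the "ordered" paradigm can be illustrated with futures: tasks execute concurrently, but their results are observed in program order, giving the programmer a sequential view of a parallel execution.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def square(x):
        """A stand-in for an expensive, independent unit of work."""
        return x * x

    with ThreadPoolExecutor(max_workers=4) as pool:
        # Tasks may run and finish in any order on the worker threads...
        futures = [pool.submit(square, i) for i in range(8)]
        # ...but results are consumed in submission (program) order.
        ordered = [f.result() for f in futures]

    print(ordered)  # [0, 1, 4, 9, 16, 25, 36, 49]
    ```

    The execution is concurrent, yet the observable behaviour is indistinguishable from running the loop sequentially.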

    In the second topic, I examine why innovation has stalled in the hardware industry. Drawing inspiration from the contribution of open-source technology to the success of the software industry, I explore whether a similar approach might benefit the hardware industry.


    Bio:

    Gagan Gupta has worked in the semiconductor industry for over two decades. Presently he is a researcher at Microsoft Research. He has led engineering and marketing teams to launch commercially successful processors at LSI Logic, Huawei, and ARC International, and has influenced R&D strategies (e.g., at Intel and SanDisk). Gagan has a Ph.D. in Computer Sciences from the University of Wisconsin-Madison. His work has been recognized with awards at multiple industrial and academic fora and is regularly cited in trade publications.



  • Joint Seat Allocations in IITs and Other CFTIs : Impact of an Algorithm


    Speaker:

    Surender Baswana (CSE, IIT Kanpur)

    Date:2017-02-17
    Time:17:00:00 (IST)
    Venue:LHC - 111
    Abstract:

    Until 2014, admissions to the Indian Institutes of Technology (IITs) were conducted separately (based on JEE Advanced ranks) from admissions to the non-IIT Centrally Funded Technical Institutes (CFTIs) (based on JEE Main ranks). The same set of candidates was eligible to apply for a seat in each of the two sets of institutes, and several hundred candidates would indeed receive two seats from the two different sets. Each such candidate could use at most one of the seats, leaving a vacancy in the other; this would be noticed much later, in many cases after classes began. Such seats would either remain vacant or be reallocated at a later stage through locally organized spot rounds, which were inherently unfair and inefficient.

    In 2015 and 2016, a new combined seat allocation process was implemented to resolve some of these issues, especially the vacancies. The process brought all CFTIs under one umbrella for admissions. Each candidate submitted a single choice list over all available programs, and received no more than a single seat from the system, based on the choices and the ranks in the relevant merit lists. At the core of this seat allocation process lies an algorithm adapted elegantly to many business rules of admissions and engineered to handle many practical challenges.

    In this talk we shall discuss this algorithm, the challenges faced in adapting it to practice, and its positive impact in the last two years.
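    A toy sketch conveys the flavour of merit-based allocation; it is a heavy simplification: with a single merit list, allocation reduces to processing candidates in merit order and giving each the highest available choice, whereas the deployed algorithm is a deferred-acceptance-style procedure handling two merit lists and many business rules.

    ```python
    def allocate(merit_order, choices, capacity):
        """merit_order: candidate ids, best rank first.
        choices: candidate id -> ordered list of preferred programs.
        capacity: program -> number of seats.
        Returns candidate id -> allotted program (or None)."""
        seats = dict(capacity)
        result = {}
        for cand in merit_order:
            result[cand] = None
            for prog in choices[cand]:
                if seats.get(prog, 0) > 0:
                    seats[prog] -= 1
                    result[cand] = prog
                    break  # at most one seat per candidate
        return result

    # Hypothetical example: two programs, one seat each, three candidates.
    merit = ["a", "b", "c"]
    prefs = {"a": ["cse", "ee"], "b": ["cse", "ee"], "c": ["cse"]}
    caps = {"cse": 1, "ee": 1}
    print(allocate(merit, prefs, caps))  # {'a': 'cse', 'b': 'ee', 'c': None}
    ```

    Because every candidate receives at most one seat system-wide, the double-allocation vacancies described above cannot arise.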

    Important notes :
    1. At least 80-90% of the talk will be accessible even to a non-expert.
    2. For fans and researchers in algorithms: This talk might change (positively) the way you view algorithms and data structures.
    3. This talk will be interesting for all, especially the BTech students who joined IITD in 2015 and 2016. They might be curious to know the technology that determined the unique program for them given two merit lists and a choice list spanning two sets of institutes.
    4. This talk will also have a few inspirational stories about the role played by the entire IIT fraternity (a student, an alumnus, a director, and a few faculty members) in this project.


  • Exploring the Role of Context and Social Media to Enhance the User-Experience in Photography and Image Tag Recommendation


    Speaker:

    Yogish Rawat, NUS Singapore

    Date:2017-02-15
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The last decade has seen a significant improvement in the ease of capturing multimedia content. Cameras now have intelligent features, such as automatic focus and face detection, to assist users in taking better photos. However, it is still a challenge to capture high-quality photographs. The complex nature of photography makes it difficult to provide real-time assistance to a user for improving the aesthetics of a capture. Recent advancements in sensor technology, wireless networks, and social media provide us an opportunity to enhance the photography experience of users.

    Our work aims at providing real-time photography assistance to users by leveraging camera sensors and social media content. We focus on two different aspects of the user experience in photography: the first is camera guidance, and the second is location recommendation for photography. In addition, we explore the role of context and user preference in developing a deep network for image tag recommendation. In this talk, we will provide a brief overview of our work on camera guidance and mainly discuss the research on location and image-tag recommendation.

    In camera guidance, we focus on landmark photography, where feedback regarding scene composition and camera parameter settings is provided to the user while a photograph is being captured. We also consider group photography, where we use ideas from the spring-electric graph model and color energy to provide real-time feedback to the user about the arrangement of people and their position and relative size in the image frame. In location recommendation, we first present a viewpoint recommendation system that provides real-time guidance based on the preview on the user's camera, the current time and the user's geo-location. Next, we introduce a photography trip recommendation method that guides a user in exploring a tourist location from the photography perspective. We draw on ideas from behavioural science for a better photography experience; more specifically, we utilize optimal foraging theory to recommend a tour for efficient exploration of a tourist location.

    Social media users like to assign tags to captured images before sharing them with others. One of the biggest advantages of adding tags to photographs is that it makes them searchable and easily discoverable by other users. However, assigning tags to photographs shared immediately after capture can be challenging; in such a scenario, real-time tag prediction based on image content can be very useful. The tags a user assigns to an image depend on the user context as well as the visual content, and user preference also plays an important role in predicting image tags. Motivated by this, we propose a convolutional deep neural network that integrates the visual content with user preference and user context in a unified framework for tag prediction. We observe that integrating user preference and context significantly improves tag prediction accuracy.


  • Applications of Regular quantifiers in Logics


    Speaker:

    A.V. Sreejith, U. Warsaw

    Date:2017-02-10
    Time:12:00:00 (IST)
    Venue:Bharti Building 501
    Abstract:

    In this talk, we look at various extensions of linear temporal logic (LTL) and first-order logic (FO). Of particular interest are extensions using modulo counting operators. First, we give a very brief introduction to logic and automata, giving special attention to linear temporal logic. LTL has found tremendous use in program verification, but it has some limitations. One such limitation is that it cannot express periodic properties, for example, "the bell rings every hour". We introduce modulo counting operators, which address this weakness of LTL, and discuss the satisfiability and model checking problems for the extended logic. In the second part of the talk, we address an open problem in descriptive complexity posed by Howard Straubing. The modulo counting extensions of first-order logic (with the addition relation) are closely related to circuit complexity. It was not known whether there are regular languages that are not definable in the modulo counting extension of first-order logic. We use a combination of combinatorics and semigroup theory to show that there are regular languages not definable in this logic, thereby answering Straubing's question.
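    A minimal sketch of a modulo-counting check over a finite trace conveys the idea; this is an illustration only, not the formal semantics of the logic in the talk.

    ```python
    def mod_count(trace, p, m, r):
        """True iff the number of positions satisfying p is ≡ r (mod m) --
        the kind of periodic counting property plain LTL cannot express."""
        return sum(1 for x in trace if p(x)) % m == r

    # Hypothetical trace: a bell that rings at every third step (4 rings in 12 steps).
    trace = ["ring" if i % 3 == 0 else "tick" for i in range(12)]
    print(mod_count(trace, lambda x: x == "ring", 2, 0))  # True: an even number of rings
    ```

    A modulo counting operator adds exactly this kind of arithmetic over positions to the logic.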


  • Statistically Secure Additions and Restricted Multiplication of Secret Shares


    Speaker:

    Dor Bitan (Ben-Gurion University)

    Date:2017-02-06
    Time:15:30:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    One of the most interesting research topics in cryptography is finding schemes for efficient fully-homomorphic encryption (FHE), preferably information-theoretically secure schemes that are not based on unproven computational hardness assumptions. The greatest breakthrough in the field of FHE schemes was made by Gentry in 2009, and since then there have been some interesting developments, e.g. by Boneh et al. and by Brakerski and Perlman. All currently known FHE schemes have computational security and are time-wise inefficient.
    We suggest an information-theoretically secure secret sharing scheme of restricted values that supports one multiplication of secrets and additions of, practically, any number of such multiplied secrets. Our scheme has perfect security against a single curious participant attack, and statistical security against an attack of a coalition of two curious participants. The scheme can be generalized to one with a larger number of participants, while remaining perfectly secure against a single curious participant attack.
    One application of our scheme is the secure homomorphic evaluation of multivariate quadratic functions and 2-CNF circuits.
    (This is joint work with Shlomi Dolev and Daniel Berend.)
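    For intuition, the additive homomorphism underlying such schemes can be sketched with plain additive secret sharing over a prime field; note this is only an illustration of homomorphic addition of shares, not the restricted-multiplication scheme of the talk.

    ```python
    import random

    P = 2**31 - 1  # a prime modulus (Mersenne prime)

    def share(secret, n):
        """Split secret into n additive shares mod P; all n are needed to reconstruct."""
        shares = [random.randrange(P) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    a, b = 1234, 5678
    sa, sb = share(a, 3), share(b, 3)
    # Each party adds its own shares locally -- no secret is ever reconstructed.
    summed = [(x + y) % P for x, y in zip(sa, sb)]
    print(reconstruct(summed))  # 6912, i.e. a + b
    ```

    Supporting even one multiplication of secrets on top of such additions, with information-theoretic security, is the harder problem the talk addresses.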


    Bio:

    Dor Bitan is a Ph.D. student in the Mathematics Department at Ben-Gurion University of the Negev in Israel.



  • Constrained Counting and Sampling: Bridging the Gap between Theory and Practice


    Speaker:

    Kuldeep Meel, Rice University

    Date:2017-01-18
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Constrained counting and sampling are two fundamental problems in Computer Science with numerous applications, including network reliability, decision making under uncertainty, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, subject to a given weighting function, of the set of solutions of the given constraints. In constrained sampling, the task is to sample randomly, subject to a given weighting function, from the set of solutions of the given constraints.

    In this talk, I will introduce a novel algorithmic framework for constrained sampling and counting that combines the classical algorithmic technique of universal hashing with the dramatic progress made in Boolean reasoning over the past two decades. This has allowed us to obtain breakthrough results in constrained sampling and counting, providing a new algorithmic toolbox in design verification, machine learning, probabilistic reasoning, and the like. I will demonstrate the utility of the above techniques on various real applications including probabilistic inference, hardware verification, and our ongoing collaboration in estimating the reliability of critical infrastructure networks during natural disasters.
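    To illustrate the universal-hashing idea on a toy scale (a brute-force sketch, not the talk's algorithm, which uses a SAT solver as the oracle): each random XOR parity constraint cuts the solution set roughly in half, so the number of solutions surviving m constraints, scaled by 2^m, estimates the true count.

```python
import itertools
import random

random.seed(0)

def solutions(formula, n):
    # brute-force enumeration of satisfying assignments over n boolean vars
    return [bits for bits in itertools.product([0, 1], repeat=n) if formula(bits)]

def xor_constraint(n):
    # a random parity constraint over a random subset of the variables
    coeffs = [random.randint(0, 1) for _ in range(n)]
    rhs = random.randint(0, 1)
    return lambda bits: sum(c & b for c, b in zip(coeffs, bits)) % 2 == rhs

def approx_count(formula, n, m, trials=20):
    # each XOR halves the solution set in expectation (universal hashing),
    # so (#solutions surviving m XORs) * 2^m estimates the true count
    estimates = []
    for _ in range(trials):
        cons = [xor_constraint(n) for _ in range(m)]
        survivors = sum(1 for s in solutions(formula, n)
                        if all(c(s) for c in cons))
        estimates.append(survivors * 2 ** m)
    estimates.sort()
    return estimates[trials // 2]  # median, for robustness

# example: at least one of the first three variables is true (n = 6)
f = lambda b: b[0] or b[1] or b[2]
print(len(solutions(f, 6)), approx_count(f, 6, m=3))  # true count is 56
```

    The real algorithmic framework replaces the brute-force enumeration with calls to a SAT solver, which is what makes the approach scale to industrial-size constraints.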


    Bio:

    Kuldeep Meel is a final year Ph.D. candidate at Rice University working with Prof. Moshe Vardi and Prof. Supratik Chakraborty (IITB). He obtained a B.Tech. from IIT Bombay and an M.S. from Rice in 2012 and 2014 respectively. His research broadly lies at the intersection of artificial intelligence and formal methods. He is the recipient of the 2016-17 IBM Ph.D. Fellowship, the 2016-17 Lodieska Stockbridge Vaughn Fellowship, and the 2013-14 Andrew Ladd Fellowship. His research won the best student paper award at the International Conference on Constraint Programming 2015. He co-won the 2014 Vienna Center of Logic and Algorithms International Outstanding Masters Thesis Award.



  • Local Guarantees in Graph Cuts and Clustering


    Speaker:

    Neha Gupta, Stanford (B.Tech, CSE, IITD, 2015)

    Date:2017-01-11
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Correlation Clustering is an elegant model that captures fundamental graph cut problems such as Min s − t Cut, Multiway Cut, and Multicut, extensively studied in combinatorial optimization. Here, we are given a graph with edges labeled + or − and the goal is to produce a clustering that agrees with the labels as much as possible: + edges within clusters and − edges across clusters. The classical approach to Correlation Clustering (and other graph cut problems) is to optimize a global objective. We depart from this and study local objectives: minimizing the maximum number of disagreements for edges incident on a single node. This naturally gives rise to a family of basic min-max graph cut problems. A prototypical representative is Min Max s − t Cut: find an s − t cut minimizing the largest number of cut edges incident on any node. We present the following results: (1) an O(√n)-approximation for the problem of minimizing the maximum total weight of disagreement edges incident on any node, and (2) a remarkably simple 7-approximation for minimizing local disagreements in complete graphs.

    This is joint work with Prof. Moses Charikar and Prof. Roy Schwartz.
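    To make the local objective concrete (an illustrative helper for evaluating a candidate clustering, not the approximation algorithm from the talk), the quantity being minimized is the maximum, over nodes, of the number of disagreement edges touching that node:

```python
from collections import defaultdict

def max_local_disagreements(edges, clustering):
    """edges: dict mapping (u, v) -> '+' or '-';
    clustering: dict mapping node -> cluster id.
    Returns the maximum number of disagreement edges incident on any node."""
    dis = defaultdict(int)
    for (u, v), sign in edges.items():
        same = clustering[u] == clustering[v]
        # a '+' edge across clusters, or a '-' edge inside a cluster, disagrees
        if (sign == '+') != same:
            dis[u] += 1
            dis[v] += 1
    return max(dis.values(), default=0)

edges = {(1, 2): '+', (2, 3): '-', (1, 3): '+'}
print(max_local_disagreements(edges, {1: 'A', 2: 'A', 3: 'B'}))  # 1
```

    Minimizing this max (rather than the global sum of disagreements) is what distinguishes the min-max family of problems studied in the talk.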


  • Orchestrating NFV Symphony


    Speaker:

    Puneet Sharma, Distinguished Technologist at Hewlett Packard Labs

    Date:2017-01-10
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Communications service providers are poised to embark on the Cloudification stage of their Network Functions Virtualization (NFV) transformation journey, after the Decoupling and Virtualization stages. There is an imminent need for automated resource-flexing technologies to realize NFV's promise of elasticity and agility. This talk discusses NFV orchestration challenges and articulates some architectural choices for service orchestration and resource allocation.


    Bio:

    Puneet Sharma is a Distinguished Technologist at Hewlett Packard Labs, where he conducts research on software-defined networking, network function virtualization, cloud and datacenter networks, mobility, and network monitoring. Prior to joining HP Labs, Puneet received a Ph.D. in Computer Science from the University of Southern California and a B.Tech. in Computer Science & Engineering from the Indian Institute of Technology, Delhi. He has delivered keynotes at various forums such as NFV World Congress 2016 and IEEE LANMAN 2014. He was the recipient of the best paper award at IEEE NFVSDN 2015 for his work on VNF performance characterization.

    He has also contributed to various standardization efforts such as co-authoring UPnP's QoS Working Group's QoSv3 standard and the IETF RFCs on the multicast routing protocol PIM. Puneet has published 60+ research articles in various prestigious networking conferences and journals (Transactions on Networking, ACM SIGCOMM, ACM HotNets, USENIX NSDI, IEEE INFOCOM, etc.). His work on Mobile Collaborative Communities was featured in the New Scientist Magazine. He has been granted 30+ US patents.

    He is an IEEE Fellow and an ACM Distinguished Scientist.



  • Pervasive Computing and Communication Techniques for Smart Healthcare


    Speaker:

    Vaskar Roy Choudhury, IIT Roorkee

    Date:2017-01-09
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The pervasive computing (PvC) paradigm emerged as a successor to distributed and mobile computing. Given the human-centric nature of PvC, intelligent healthcare has been identified as a major application domain. Furthermore, the increase in the elderly population across the globe requires researchers to develop round-the-clock remote health monitoring systems for elderly and ailing individuals (using BP and pulse sensors, for example) and to inform caregivers in case of an emergency. We need to develop and use innovative approaches that support the much-needed transformation of healthcare from reactive and hospital-centered to preventive, proactive, evidence-based, person-centered, and focused on well-being rather than disease. We have been working on various aspects of intelligent healthcare, including Body Area Network (BAN) construction for the collection of vital signs, investigating appropriate networking solutions for transporting large amounts of healthcare data, intelligent decision making by processing healthcare data, and detecting epidemics over a large geographical area. In this talk, I shall present an overview of our work on intelligent healthcare and the associated open research challenges.



2016 talks

  • What is in a Name? An Access-Based Abstraction for Points-to Analysis


    Speaker:

    Vini Kanvar 

    Date:2016-12-27
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    A points-to analysis computes a sound abstraction of memory using a name-based abstraction that groups concrete memory locations using names: All stack/static locations that are accessed using the same variable name are grouped together across all execution instances of the program and so are all heap locations allocated by the same statement. Since this grouping is fixed across all program points (even in a flow-sensitive analysis), it merges pointer relationships indiscriminately making the heap abstraction significantly imprecise.

    We propose an access-based abstraction that sub-divides each name-based group into subgroups using an additional criterion of the sets of access paths reaching the locations. The intuition is that the memory locations that are both allocated and accessed alike should be treated alike and the locations that may be allocated or accessed differently should not be merged. Since the ways of accessing locations could change flow sensitively because of the assignment statements in a program, the subgroups change flow sensitively. This creates a more precise view of the memory. Theoretically, it is at least as precise as the name-based abstraction when no name-based group can be subdivided into smaller subgroups; practically it is far more precise.

    Our empirical measurements show the benefits of access-based abstraction on SPEC CPU 2006 and heap manipulating SV-COMP benchmarks. Our prototype implementation scales to 20.6 kLoC and can improve the precision even up to 99% (in terms of the average number of aliases of an access path).


    Bio:

    Vini Kanvar is a senior Ph.D. student working with Prof. Uday Khedker in the area of program analysis.



  • On the Single-Pass Streaming Complexity of the Set Cover Problem


    Speaker:

    Sanjeev Khanna, U. Penn

    Date:2016-12-20
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In the set cover problem, we are given a collection of sets over a universe of elements, and the goal is to find a sub-collection of these sets whose union covers the universe. The set cover problem is a fundamental optimization problem with many applications in computer science and related disciplines. In this talk, we investigate the set cover problem in the streaming model of computation whereby the sets are presented one by one in a stream, and we wish to find an approximate set cover or simply estimate its size using a space-efficient algorithm.

    We give a tight resolution of the tradeoff between the space available to the algorithm and the quality of the set cover approximation. Our results also extend to the more general problem of estimating the optimal value of a covering integer program presented in a stream.

    This is joint work with my students Sepehr Assadi and Yang Li.
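    As a warm-up for the streaming setting (a naive single-pass heuristic for intuition only, not the algorithms achieving the tight space/approximation tradeoff from the talk): a one-pass algorithm can greedily keep any arriving set that covers enough still-uncovered elements, using space proportional to the universe rather than to the stream of sets.

```python
def one_pass_cover(universe, set_stream, threshold):
    """Single pass over the stream: keep a set iff it covers at least
    `threshold` still-uncovered elements. Stores O(|universe|) state
    plus the chosen sets, never the whole stream."""
    uncovered = set(universe)
    chosen = []
    for s in set_stream:
        if len(uncovered & set(s)) >= threshold:
            chosen.append(s)
            uncovered -= set(s)
    return chosen, uncovered

universe = range(1, 7)
stream = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6}]
cover, leftover = one_pass_cover(universe, stream, threshold=2)
print(cover, leftover)  # keeps {1,2,3} and {4,5,6}; nothing left uncovered
```

    The choice of threshold is exactly where the tradeoff lives: a high threshold saves space by discarding sets but may leave elements uncovered, which is the tension the talk's results resolve tightly.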


  • Synthesizing Heap Manipulations via Integer Linear Programming


    Speaker:

    Prof. Subhajit Roy

    Date:2016-12-15
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Writing heap manipulating programs is hard. Even though the high-level algorithms may be simple, it is often tedious to express them using low-level operations. We propose an algorithm that uses expressions of intent in the form of concrete examples, drawn as box-and-arrow diagrams, to synthesize heap manipulations automatically. Instead of modeling the concrete examples in a monolithic manner, we extract a set of patterns of manipulation that can be applied repeatedly to construct such programs. We then attempt to infer these patterns as linear transformations, leveraging the power of ILP solvers for program synthesis. We have successfully applied our algorithm to synthesize data-structure manipulations on singly and doubly linked lists, AVL trees and binary search trees.
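    To make the "patterns as linear transformations" idea concrete (a toy brute-force search standing in for a real ILP solver, with hypothetical example data): given concrete before/after pairs extracted from the user's diagrams, one can search for integer coefficients that fit every example.

```python
from itertools import product

# toy setting: concrete examples give (before, after) index pairs for a
# repeated manipulation pattern; we look for integers (a, b) such that
# after = a * before + b holds on all examples
examples = [(0, 1), (1, 2), (2, 3)]  # e.g. advancing to the next list node

def infer_linear(examples, bound=3):
    # exhaustive search over small coefficients; an ILP solver would
    # answer the same query without enumeration
    for a, b in product(range(-bound, bound + 1), repeat=2):
        if all(a * x + b == y for x, y in examples):
            return a, b
    return None

print(infer_linear(examples))  # (1, 1): the pattern is "index + 1"
```

    The synthesized transformation, applied repeatedly, reconstructs the loop body of the heap manipulation the examples describe.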


  • Empathic Human-Computer Interaction


    Speaker:

    Arindam Dey, University of South Australia

    Date:2016-12-05
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Computer interfaces are currently capable of responding to instructions provided by the user. However, most of these interfaces do not receive or respond to the emotional needs of the user, which in many cases users themselves are also not fully aware of. Empathic human-computer interaction (HCI) is a field of HCI where human emotions are measured and used to change the state of the user interface in real time. In collaborative setups, the emotions can also be shared between the collaborators to make them aware of the emotional states of each other. Augmented and virtual realities are three-dimensional user interface technologies that can provide immersive and interactive experiences by either augmenting or completely replacing the real world with virtual information. These augmented and virtual reality interfaces, along with other two-dimensional interfaces, can be designed to provide empathic experiences. In this talk, I will present my past and present research using the above technologies along with potential future research directions.


  • The Approximations Vs. Abstractions Dilemma in Pointer Analysis


    Speaker:

    Prof. Uday Khedker, CSE IIT Bombay

    Date:2016-11-24
    Time:11:00:00 (IST)
    Venue:IIA-501 Bharti Building
    Abstract:

    The world of programming is divided into religious camps with opposing views on the use of pointers. The verification community as well as the HPC community abhors pointers while the hackers adore them. Compiler writers love the convenience that pointers provide to build fancy data structures but hate the difficulties that arise in trying to optimize a program containing pointers.

    There has been a lot of work on analyzing programs containing pointers. The mathematics of pointer analysis has been studied in great detail, and the findings conclusively say that, like many other interesting problems, the analysis of pointers in its full glory is undecidable and remains intractable even if some imprecision were to be tolerated. Given the vital importance of pointer analysis and the inherent difficulty of performing precise pointer analysis for practical programs, a large fraction of the pointer analysis community has come to believe that compromising on precision is necessary for scalability and efficiency. This is evidenced by the fact that a large number of reported investigations in pointer analysis involve a significant amount of engineering approximations.

    We find it hard to accept this assumption as the final inevitability. We believe that a lot could be gained by exploring a science of pointer analysis that tries to build clean abstractions rather than hasty approximations. In our opinion, this is a road less travelled in pointer analysis. Without undermining the engineering efforts, we propose that a search for approximations should begin only after building clean abstractions and not before it. The talk describes our efforts in this direction.


  • Targeted Client Synthesis to Detect Concurrency Bugs


    Speaker:

    Dr. Murali Krishna Ramanathan

    Date:2016-11-22
    Time:11:00:00 (IST)
    Venue:SIT 001
    Abstract:

    Detecting concurrency bugs can be challenging due to the intricacies associated with their manifestation. These intricacies correspond to identifying the methods that need to be invoked concurrently, the inputs passed to these methods, and the interleaving of the threads that cause the erroneous behavior. Neither fuzzing-based testing techniques nor over-approximate static analyses are well positioned to detect subtle concurrency defects while retaining high accuracy alongside satisfactory coverage. While dynamic analysis techniques have been proposed to overcome some of the challenges in detecting concurrency bugs, we observe that their success is critically dependent on the availability of effective multithreaded clients. Without a priori knowledge of the defects, manually constructing defect-revealing multithreaded clients is non-trivial.

    In this talk, I will present our approach that addresses the problem of generating effective multithreaded clients. Our technique analyzes the execution traces obtained from executing random sequential clients and produces a concurrent client program that drives shared objects via library method calls to states conducive to triggering deadlocks, data races, atomicity violations or assertion violations. We have implemented prototypes that incorporate the aforementioned ideas and applied them on 29 well-tested concurrent classes from popular Java libraries, including the latest version of the JDK. We are able to automatically synthesize clients that helped expose more than 300 concurrency bugs. We reported many previously unknown bugs to the developers of these libraries, resulting in either fixes to the code or changes to the documentation pertaining to the thread-safe behavior of the relevant classes.
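    As a minimal illustration of what a defect-revealing multithreaded client looks like (a hand-written Python example for intuition, unrelated to the authors' Java tooling): the client below drives a non-thread-safe counter from several threads, and the unsynchronized read-modify-write may lose updates.

```python
import threading

class Counter:
    """A class whose increment is not atomic: separate read, modify, write."""
    def __init__(self):
        self.value = 0

    def increment(self):
        tmp = self.value   # read
        tmp += 1           # modify
        self.value = tmp   # write: another thread may have written in between

def client(counter, n):
    # the "multithreaded client": repeatedly invokes the racy library method
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=client, args=(counter, 100_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# with lost updates, the final value may fall short of 4 * 100_000
print(counter.value)
```

    The hard part, which the talk automates, is synthesizing such clients, i.e. choosing which methods to invoke concurrently and with what inputs, without knowing the defect in advance.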

    Bio:

    Murali Krishna Ramanathan is an Assistant Professor in the Department of Computer Science and Automation, IISc, Bangalore. His research interests are in programming languages and software engineering. Previously, he was a member of the analysis team at Coverity and helped build bug-detection tools that are widely used in the software industry. He received his PhD in Computer Science from Purdue University, USA.



  • Brain-Computer Interfaces: Recent Advances and Future Applications


    Speaker:

    Rajesh P. N. Rao
    Director, Center for Sensorimotor Neural Engineering
    Department of Computer Science and Engineering
    University of Washington, Seattle, USA

    Date:2016-11-18
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Ideas such as telepathy and telekinesis have long been a staple of science fiction stories and movies. Recent advances in the field of brain-computer interfaces (BCIs) are bringing these ideas closer to reality. Recent demonstrations have ranged from direct brain control of cursors, robots, and prosthetic arms to direct brain-to-brain communication. BCIs are enabling communication in locked-in patients, rehabilitation and restoration of movement in paralyzed and disabled individuals, and are increasingly being used in novel applications such as security, lie detection, alertness monitoring, entertainment, gaming, education, and human augmentation. This talk will provide an overview of BCIs and recent developments in the field, with highlights from BCI research at the University of Washington and the Center for Sensorimotor Neural Engineering.


    Bio:

    Rajesh P. N. Rao is the Director of the NSF Center for Sensorimotor Neural Engineering and Professor of Computer Science and Engineering at the University of Washington. He is the recipient of a Guggenheim Fellowship, a Fulbright Scholar award, an NSF CAREER award, a Young Investigator Award from the Office of Naval Research, a Sloan Faculty Fellowship, and a Packard Fellowship for Science and Engineering. He is the author of the book Brain-Computer Interfacing (Cambridge University Press, 2013) and the co-editor of two volumes, Probabilistic Models of the Brain (MIT Press, 2002) and Bayesian Brain (MIT Press, 2007). His research spans the areas of computational neuroscience, AI, and brain-computer interfacing. Prof. Rao's group was the first to demonstrate direct brain control of a humanoid robot in 2007 and direct brain-to-brain communication in 2013. With Prof. Adrienne Fairhall, he offered the first MOOC (massive open online course) in computational neuroscience on Coursera. Prof. Rao's other interests include classical Indian paintings and the 4000-year-old undeciphered Indus script, a topic on which he has given a TED talk.



  • Incremental Program Obfuscation


    Speaker:

    Dr. Omkant Pandey (Stony Brook University)

    Date:2016-11-09
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Recent advances in program obfuscation suggest that it is possible to create software that can provably safeguard secret information. However, software systems usually contain large executable code that is updated multiple times and sometimes very frequently. Freshly obfuscating the program for every small update will lead to a considerable efficiency loss. Thus, an extremely desirable property for obfuscation algorithms is *incrementality*: small changes to the underlying program translate into small changes to the corresponding obfuscated program.

    In this talk, I will briefly discuss the notion of *incremental program obfuscation.* I will explain various notions of security for incremental obfuscation, its limits, and what is possible to achieve for the indistinguishability based notions.

    Towards the end, I will spend some time discussing the Ph.D. program at Stony Brook University.


    Bio:

    Omkant Pandey is an assistant professor at Stony Brook University. Previously, he was a senior researcher at University of California Berkeley. He is interested in the broad areas of secure computation and next-generation encryption systems. He is the winner of 2016 ACM CCS Test-of-Time Award for his work on Attribute Based Encryption. He received his Ph.D. from UCLA.



  • The Cognitive Era


    Speaker:

    Dr. Ravi Kothari (IBM-IRL)

    Date:2016-11-07
    Time:11:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The ability to capture, store and process increasingly large volumes of data makes it possible to gain deeper insights that can improve almost every aspect of our lives. In this talk we provide a broad overview of some of the exciting things that we are enabling for this new data-driven world.


    Bio:

    Ravi Kothari is an IBM Distinguished Engineer and the Chief Scientist of IBM Research - India.



  • India Stack


    Speaker:

    Saurabh Panjwani

    Date:2016-10-31
    Time:15:00:00 (IST)
    Venue:SIT Seminar Room #113
    Abstract:

    India Stack is a suite of open technology standards being built by industry experts in India to ease product development in essential consumer-service verticals like finance and healthcare for Indian consumers. It is designed around India's national identity system, Aadhaar, currently being used in the delivery of public services in the country. Based on Aadhaar, a series of technology standards for digital payments (the Unified Payments Interface (UPI)), data storage (the digital locker standard) and digital signatures are being developed as part of the India Stack. These standards will enable the development of applications that can deliver services efficiently to Indian citizens, minimizing the use of cash and paper and enabling "presence-less" interaction with service providers. In this meeting, I will discuss research questions around India Stack and how academic researchers could get involved in the project.



  • QUADSEAL: A Hardware Countermeasure against Side channel Attacks on AES


    Speaker:

    Prof. Sri Parameswaran
    Embedded Systems Laboratory, School of Computer Science and Engineering
    University of New South Wales, Sydney

    Date:2016-10-24
    Time:11:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Deep devastation is felt when privacy is breached, personal information is lost, or property is stolen. Now imagine all of this happening at once, with the victim unaware of its occurrence until much later. This is the reality, as an increasing number of electronic devices are used as keys, wallets and files. Security attacks targeting embedded systems illegally gain access to information or destroy it. The Advanced Encryption Standard (AES) is used to protect many of these embedded systems. While mathematically shown to be quite secure, it is now well known that AES circuits and software implementations are vulnerable to side-channel attacks. Side-channel attacks are performed by observing properties of the system (such as power consumption, electromagnetic emission, etc.) while the system performs cryptographic operations. In this talk, different power-based attacks are described, and various countermeasures are explained. In particular, a countermeasure titled Algorithmic Balancing is described in detail, along with its implementation in hardware and software. Since process variation impairs countermeasures, we show how this countermeasure can be made to overcome process variations.
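    To illustrate the attack model the countermeasure defends against (a simplified correlation power analysis simulation with a toy S-box and a hypothetical key byte, not the countermeasure itself): if each power "trace" leaks roughly the Hamming weight of an S-box output plus noise, correlating predicted leakage against the traces recovers the key.

```python
import random

random.seed(1)

SBOX = list(range(256))
random.shuffle(SBOX)  # toy public S-box standing in for the AES S-box

SECRET_KEY = 0x3C     # hypothetical key byte the attacker wants to recover

def hamming_weight(x):
    return bin(x).count("1")

def leak(plaintext):
    # power model: one trace sample ~ Hamming weight of S-box output + noise
    return hamming_weight(SBOX[plaintext ^ SECRET_KEY]) + random.gauss(0, 0.5)

plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [leak(p) for p in plaintexts]

def corr(xs, ys):
    # Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# the key guess whose predicted leakage best matches the traces wins
best_guess = max(range(256), key=lambda k: corr(
    [hamming_weight(SBOX[p ^ k]) for p in plaintexts], traces))
print(hex(best_guess))
```

    Countermeasures like the Algorithmic Balancing described in the talk work by making the power consumption (approximately) independent of the processed data, destroying exactly the correlation this attack exploits.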


    Bio:

    Sri Parameswaran is a Professor in the School of Computer Science and Engineering at the University of New South Wales. He also serves as the Postgraduate Research and Scholarships coordinator at the same school. Prof. Parameswaran received his B. Eng. Degree from Monash University and his Ph.D. from the University of Queensland in Australia. He has held visiting appointments at University of California, Kyushu University and Australian National University. He has also worked as a consultant to the NEC Research laboratories at Princeton, USA and to the Asian Development Bank in Philippines. His research interests are in System Level Synthesis, Low power systems, High Level Systems, Network on Chips and Secure and Reliable Processor Architectures. He is the Editor-in-Chief of IEEE Embedded systems Letters. He serves or has served on the editorial boards of IEEE Transactions on Computer Aided Design, ACM Transactions on Embedded Computing Systems, the EURASIP Journal on Embedded Systems and the Design Automation of Embedded Systems. He has served on the Program Committees of Design Automation Conference (DAC), Design and Test in Europe (DATE), the International Conference on Computer Aided Design (ICCAD), the International Conference on Hardware/Software Codesign and System Synthesis (CODES-ISSS), and the International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES).



  • Legal, Non-legal...and beyond


    Speaker:

    Kripabandhu Ghosh

    Date:2016-09-30
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Information retrieval from legal documents is an emerging research area, and legal IR has posed unprecedented challenges to researchers. In this talk, the speaker will address some of the prominent problems in the legal domain (which he explored during his PhD) along with their proposed solutions. In addition, the speaker will also discuss some important problems in information extraction from online social media texts (e.g., tweets) during disaster situations.


    Bio:

    Kripabandhu Ghosh has recently defended his Ph.D. in Computer Science from Indian Statistical Institute, Kolkata. He has also been an International Scholar from September 2015 to January 2016 at KU Leuven, Belgium. His broad area of interest is Information Retrieval. More specifically, he is interested in Information Retrieval on Legal Domain, improving retrieval performance on Noisy Text in the absence of training data, optimization techniques on Data Fusion, Query Reweighting and Expansion, personalized advertisement built from web sources, Online Social Media text retrieval and analysis etc. He has been in the Organizing Committee / Program Committee of the Forum for Information Retrieval Evaluation (FIRE: http://fire.irsi.res.in/fire/2016/home) since 2011. He has co-organized FIRE 2013 “Information access in legal domain” track (http://www.isical.ac.in/~fire/2013/legal.html) and Personalised AdveRtisements buIlt from web Sources (PARIS) workshop (http://www.parisproject.be/workshop_pic/program_schedule.pdf). He is currently co-organizing FIRE “Information Extraction from Microblogs Posted during Disasters” track (https://sites.google.com/site/fire2016microblogtrack/information-extraction-from-microblogs-posted-during-disasters).



  • Learning, Games and Sample Complexity


    Speaker:

    Arnab Basu, IIM Bangalore

    Date:2016-09-29
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    We talk about some of the open problems investigated by the speaker spanning learning, games and sample complexity. The machinery needed includes probability, geometry and topology, and dynamical systems. Though several applications to modern-day data analysis can be naturally derived from the techniques discussed herein, the talk is mostly theoretical.


    Bio:

    Arnab Basu is a computer scientist and applied mathematician, working as an Associate Professor of Decision Sciences and Information Systems at the Indian Institute of Management (IIM) Bangalore since 2010. He was also the External Expert Consultant to Data Sciences Research for InMobi Technologies for one year, from April 2015 to March 2016. He holds a B.Tech(H) degree in CSE from IIT Kharagpur and a Ph.D. in Computer and System Sciences from TIFR. His research interest is interdisciplinary, with deep applications to large-scale data analysis, algorithms, machine learning, optimization and algorithmic economics.



  • Breaking the Memory Wall in Multi-core Systems through Prefetch and Compression Engines


    Speaker:

    Biswabandan Panda, INRIA Rennes

    Date:2016-09-22
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Historically, the rate of improvement in memory speed has been lower than that in processor speed, creating a growing gap between the two. This inevitably leads to the "memory wall", where processor cores' pipelines stall waiting for memory accesses. Moreover, the advent of multi-core systems in recent years exacerbates the memory wall, as multiple applications contend with each other for shared resources such as off-chip bandwidth.

    This talk will cover two hardware techniques, prefetch engines and cache compression engines, that try to break the memory wall. Prefetch engines speculatively bring data into the cache, converting cache misses into cache hits. Cache compression engines also increase the cache hit ratio, by storing more data in fewer bytes and thereby increasing the effective cache capacity. The talk will provide a brief introduction to prefetch engines (especially aggressiveness engines) and cache compressors, followed by a detailed description of my recent research work, such as a synergistic prefetcher aggressiveness controller and dictionary-sharing-based cache compression.
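    To make the compression idea concrete, here is a toy, software-level sketch of dictionary-based cache-line compression; the word size, layout and function names are illustrative assumptions, not the design presented in the talk:

```python
import math

def dict_compress(line_words):
    """Toy dictionary compression of one cache line: each word becomes
    an index into a per-line dictionary of distinct words. Lines with
    many repeated words compress well; real hardware schemes (including
    the dictionary-sharing design mentioned above) refine this idea."""
    dictionary, encoded = [], []
    for w in line_words:
        if w not in dictionary:
            dictionary.append(w)
        encoded.append(dictionary.index(w))
    return dictionary, encoded

def compressed_bits(dictionary, encoded, word_bits=32):
    # Dictionary entries are stored verbatim; each word slot stores only
    # an index wide enough to address the dictionary.
    index_bits = max(1, math.ceil(math.log2(len(dictionary))))
    return len(dictionary) * word_bits + len(encoded) * index_bits

# A line of eight 32-bit words with only two distinct values:
d, e = dict_compress([0, 0, 0xCAFE, 0, 0, 0xCAFE, 0, 0])
```

    Here 8 x 32 = 256 bits shrink to 2 x 32 + 8 x 1 = 72 bits, which is the kind of effective-capacity gain the abstract refers to.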


    Bio:

    Biswabandan Panda is a post-doctoral researcher at the PACAP team in INRIA, Rennes. He received his Ph.D. from Indian Institute of Technology Madras, where he was a recipient of TCS Ph.D. fellowship. His area of research includes hardware prefetching, cache/DRAM Compression, and memory sub-system management for multi-core systems.



  • Efficient Similarity Search across Top-k Lists under the Kendall's Tau Distance


    Speaker:

    Koninika Pal, TU Kaiserslautern

    Date:2016-09-16
    Time:12:00:00 (IST)
    Venue:SIT Building Room #113
    Abstract:

    We consider the problem of similarity search in a set of top-k lists under the generalized Kendall's Tau distance function. This distance describes how related two rankings are in terms of discordantly ordered items. We consider pair- and triplet-based indices to counter the shortcomings of naive inverted indices, and derive efficient query schemes by relating the proposed index structures to the concept of locality-sensitive hashing (LSH). Specifically, we devise four different LSH schemes for Kendall's Tau using two generic hash families over individual elements or pairs of them. We show that each of these functions has the desired property of being locality sensitive. Further, we discuss the selection of hash functions for the proposed LSH schemes for a given query ranking, called query-driven LSH, and derive bounds on the number of hash functions required to achieve a predefined recall goal. Experimental results using two real-world datasets show that the devised methods outperform the prefix-filtering method SimJoin, the state-of-the-art method for querying similar sets, and are far superior to a plain inverted-index-based approach.
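    The discordant-pair counting at the heart of this distance can be sketched in a few lines. This minimal version assumes both rankings contain the same items; the generalized variant used for top-k lists adds penalty terms for items appearing in only one list:

```python
from itertools import combinations

def kendall_tau_distance(r1, r2):
    """Number of item pairs that the two rankings order differently.

    r1 and r2 are rankings given as lists of items, best first.
    Assumes both lists contain the same set of items.
    """
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    discordant = 0
    for a, b in combinations(r1, 2):
        # Discordant: the rankings disagree on the relative order of a and b.
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
            discordant += 1
    return discordant
```

    Identical lists are at distance 0, and fully reversed lists of length k are at distance k(k-1)/2, which is the range the LSH schemes must discriminate over.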


    Bio:

    Koninika Pal is a fourth year PhD student at TU Kaiserslautern under the supervision of Sebastian Michel. Her research interests are in ranking, querying and performance study for similarity search in big data.



  • Interactive Data Science


    Speaker:

    Tim Kraska, Brown University, USA

    Date:2016-09-09
    Time:15:30:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Unleashing the full potential of Big Data requires a paradigm shift in the algorithms and tools used to analyze data, towards more interactive systems with highly collaborative and visual interfaces. Ideally, a data scientist and a domain expert should be able to make discoveries together by directly manipulating, analyzing and visualizing data on the spot, instead of having week-long back-and-forth interactions between them. Current systems, such as traditional databases or more recent analytical frameworks like Hadoop or Spark, are ill-suited for this purpose. They were not designed to be interactive, nor to support the special requirements of visual data exploration. Similarly, most machine learning algorithms are not able to provide initial answers at "human speed" (i.e., sub-seconds), nor are existing methods sufficient to convey the impact of the various risk factors, such as approximation errors or incompleteness within the data.

    In this talk, I will present our vision of a new approach for conducting interactive exploratory analytics and explain why integrating the aforementioned features requires a complete rethinking of the full analytics stack, from the interface to the "guts". I will present recent results towards this vision including our novel interface, analytical engine and index structure, and outline what challenges are still ahead of us.


    Bio:

    Tim Kraska is an Assistant Professor in the Computer Science department at Brown University. Currently, his research focuses on Big Data management systems for modern hardware and new types of workloads, especially interactive analytics. Before joining Brown, Tim spent 3 years as a PostDoc in the AMPLab at UC Berkeley, where he worked on hybrid human-machine database systems and cloud-scale data management systems. Tim received his PhD from the ETH Zurich under the supervision of Donald Kossmann. He was awarded an NSF Career Award (2015), an Airforce Young Investigator award (2015), a Swiss National Science Foundation Prospective Researcher Fellowship (2010), a DAAD Scholarship (2006), a University of Sydney Master of Information Technology Scholarship for outstanding achievement (2005), the University of Sydney Siemens Prize (2005), two VLDB best demo awards (2015 and 2011) and an ICDE best paper award (2013).



  • Some Perspectives in (Big) Social Data Exploration


    Speaker:

    Sihem Amer-Yahia, CNRS, France

    Date:2016-09-09
    Time:16:30:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    The web has evolved from a technology platform to a social milieu where factual, opinion and behavior data interleave. A number of social applications are being built to analyze and extract value from this data, encouraging us to adopt a data-driven approach to research. I will provide different formulations of social data exploration and describe recent work on extracting user segments on the social Web. More specifically, I will talk about extracting and exploring user segments on collaborative rating sites. No prior knowledge of this area is required.

    This talk is based on collaborations with colleagues at UT Austin, NJIT, University of British Columbia, and Google Research.


    Bio:

    Sihem Amer-Yahia is DR1 CNRS at LIG in Grenoble where she leads the SLIDE team. Her research interests are in social data management and crowdsourcing. She previously held research positions at QCRI, Yahoo! Research and AT&T Labs. Sihem served on the ACM SIGMOD Executive Board and the VLDB Endowment. She is Editor-in-Chief of the VLDB Journal and is on the editorial boards of TODS and Information Systems. She is currently chairing the VLDB Workshops 2016 and will chair VLDB 2018. Sihem completed her Ph.D. in Computer Science at INRIA in 1999, and her Diplôme d'Ingénieur at INI, Algeria.



  • Tracking the Conductance of Rapidly Evolving Topic-Subgraphs


    Speaker:

    Sainyam Galhotra, UMass, Amherst

    Date:2016-09-02
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Monitoring the formation and evolution of communities in large online social networks such as Twitter is an important problem that has generated considerable interest in both industry and academia. Fundamentally, the problem can be cast as studying evolving subgraphs (each subgraph corresponding to a topical community) on an underlying social graph, with users as nodes and the connections between them as edges. A key metric of interest in this setting is tracking the changes to the conductance of subgraphs induced by edge activations. This metric quantifies how well or poorly connected a subgraph is to the rest of the graph relative to its internal connections. Conductance has been demonstrated to be of great use in many applications, such as identifying bursty topics, tracking the spread of rumors, and so on. However, tracking this simple metric presents a considerable scalability challenge: the underlying social network is large, the number of communities that are active at any moment is large, the rate at which these communities evolve is high, and moreover, we need to track conductance in real-time. We address these challenges in this paper. We propose an in-memory approximation called BloomGraphs to store and update these (possibly overlapping) evolving subgraphs. As the name suggests, we use Bloom filters to represent an approximation of the underlying graph. This representation is compact and computationally efficient to maintain in the presence of updates. This is especially important when we need to simultaneously maintain thousands of evolving subgraphs. BloomGraphs are used in computing and tracking conductance of these subgraphs as edge activations arrive. BloomGraphs have several desirable properties in the context of this application, including a small memory footprint and efficient updateability. We also demonstrate mathematically that the error incurred in computing conductance is one-sided and that, in the case of evolving subgraphs, the change in approximate conductance has the same sign as the change in exact conductance in most cases.
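    For reference, the metric itself is simple to state. A minimal exact computation might look like the sketch below; BloomGraphs approximate exactly this quantity using Bloom-filter-backed adjacency to save memory:

```python
def conductance(edges, subset):
    """Conductance of a vertex subset S in an undirected graph.

    phi(S) = cut(S, V-S) / min(vol(S), vol(V-S)), where vol(X) is the
    sum of degrees of vertices in X and cut counts edges crossing the
    boundary. This is the exact, in-memory version; BloomGraphs trade
    a one-sided approximation error for a much smaller footprint.
    """
    S = set(subset)
    cut = vol_s = vol_rest = 0
    for u, v in edges:
        in_u, in_v = u in S, v in S
        vol_s += in_u + in_v            # endpoints inside S add to vol(S)
        vol_rest += (not in_u) + (not in_v)
        if in_u != in_v:                # edge crosses the boundary
            cut += 1
    denom = min(vol_s, vol_rest)
    return cut / denom if denom else 0.0
```

    A well-separated community has low conductance; for a triangle {1, 2, 3} with a single pendant edge (3, 4), the subset {1, 2} has conductance 2/4 = 0.5.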


    Bio:

    Sainyam Galhotra is a graduate student at the University of Massachusetts (UMass) Amherst, USA. He spent his summer as a research intern at Google Research, Mountain View. His research interests include graph analysis, data mining and causality analytics. Prior to pursuing graduate studies, he worked as a budding scientist at the Xerox Research Centre India (XRCI), where he contributed towards designing algorithms for real-time analysis of problems pertaining to social media. He obtained his undergraduate degree in computer science from the Indian Institute of Technology (IIT), Delhi, where he worked with Prof. Amitabha Bagchi, Prof. Maya Ramanath and Dr. Srikanta Bedathur (IBM). His work has been published in top data mining and machine learning conferences. Sainyam can be contacted at sainyam@cs.umass.edu.



  • An Equivalence-Checker for Assembly Programs with Loops


    Speaker:

    Dr. Sorav Bansal, CSE Department

    Date:2016-08-31
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Could there be a compiler that outputs the "optimal" code for a program specification? The problem is undecidable in general, and efforts in this direction are often called 'superoptimizers' (like the superman). I will briefly discuss how superoptimizers work, and then discuss our efforts on integrating support for loop-based optimizations inside a superoptimizer.

    Supporting loops in a superoptimizer can yield up to 12x performance improvements over state-of-the-art compilers. A fundamental requirement for realizing this is an assembly program equivalence checker that can support loops. We report the design and implementation of an equivalence checker, Q, that supports programs with loops, and discuss our experiences with testing it both within a superoptimizer and on end-to-end compiler-based transformations on programs much larger than the current capabilities of any superoptimizer.
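    As a toy illustration of what an equivalence checker establishes, one can brute-force equivalence of two loop-free 8-bit computations. The real checker Q uses symbolic reasoning rather than enumeration, which is exactly what lets it handle loops and large programs; this sketch and its names are illustrative only:

```python
def equivalent_8bit(f, g):
    """Check that f and g agree on every 8-bit input, modulo 2**8.

    Exhaustive enumeration is only feasible for tiny input spaces and
    cannot handle unbounded loops -- the gap a symbolic, loop-aware
    equivalence checker is designed to close.
    """
    return all((f(x) & 0xFF) == (g(x) & 0xFF) for x in range(256))

# A classic strength-reduction rewrite a superoptimizer might propose:
ok = equivalent_8bit(lambda x: x * 2, lambda x: x << 1)
```

    Here `ok` is true: doubling and shifting left by one agree on all 8-bit inputs, so the rewrite is safe at this width.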

     


  • Verifying Security Properties in Modern SoCs Using Instruction-Level Abstractions


    Speaker:

    Pramod Subramanyan, Princeton

    Date:2016-08-29
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Modern Systems-on-Chip (SoC) designs comprise interacting intellectual property (IP) blocks which are often sourced from third-party vendors and contain both hardware accelerators and programmable cores executing firmware. Verifying that SoCs meet their security requirements in this context is especially challenging. These challenges relate to (i) specifying security properties for verification, and (ii) co-verification of these properties across firmware (FW) and hardware (HW).

    This talk will describe a methodology for system-level security verification of SoCs. We address the FW/HW co-verification challenge by raising the level of abstraction of the hardware modules to be similar to that of instructions in software/firmware. This abstraction, referred to as an instruction-level abstraction (ILA), plays a role similar to the instruction set architecture (ISA) for general purpose processors and enables high-level analysis of SoC firmware. In particular, the ILA can be used instead of the cycle-accurate and bit-precise register transfer level (RTL) implementation for scalable verification of system-level security properties in SoCs.

    Manual construction of the ILA in the context of third-party IPs can be challenging. Hence, we introduce techniques to semi-automatically synthesize the ILA using a template abstraction and directed simulations of the SoC hardware. We describe techniques to ensure that the ILA is a correct abstraction of the underlying hardware implementation. We then show how the ILA can be used for SoC security verification by designing a specification language for security properties and an algorithm based on symbolic execution to verify these properties. We discuss two case studies of applying ILA-based verification to (i) an SoC built out of open source components and (ii) part of a commercial SoC. The methodology discovered several bugs in the RTL implementations, simulators and firmware.


  • Testing Quantum Devices and Quantum Mechanics


    Speaker:

    Prof Umesh Vazirani, EECS, UC Berkeley

    Date:2016-08-26
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The tremendous recent progress in the physical realization of devices based on the principles of quantum mechanics also throws up a fundamental challenge: how to test quantum devices, which are by nature imperfect and susceptible to uncontrollable faults. The classical verifier of such a device is necessarily at a disadvantage due to the exponential power of quantum systems. Nevertheless, an exciting sequence of results shows that uniquely quantum features such as entanglement (Einstein's "spooky action at a distance") can be leveraged to make such testing possible. I will describe how such testing can thwart a malicious adversary in quantum cryptographic settings such as quantum key distribution and certifying quantum random numbers. More sophisticated schemes can be used to verify that a quantum computer is truly quantum. At a conceptual level, such tests of quantum devices are really tests of quantum mechanics that go well beyond the famous Bell tests that disproved Einstein's objections to quantum mechanics.

    I will also describe a more pragmatic but principled approach to the testing of large scale quantum annealers - by performing a quantum Turing test comparing the quantum annealer to a suitable classical benchmark. I will discuss the results of applying such a test to the D-Wave 108 qubit quantum annealer, as well as the ~1000 qubit D-Wave 2X quantum annealer.

    The talk will be accessible to broad audience of computer scientists, physicists and mathematicians.


    Bio:

    Umesh Vazirani is the Roger A. Strauch Professor of Electrical Engineering and Computer Science at the University of California, Berkeley, and director of the Berkeley Quantum Computation Center. He is one of the founders of the field of quantum computation, a recipient of the Fulkerson Prize in discrete mathematics awarded by the American Mathematical Society, and author of the books “An Introduction to Computational Learning Theory” with Michael Kearns and “Algorithms” with Sanjoy Dasgupta and Christos Papadimitriou.

     



  • GPS: An Indispensable Technology


    Speaker:

    Prof. Pratap Misra, Tufts University

    Date:2016-08-24
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    GPS in many ways is like the Internet. Both are gifts of the U.S. Department of Defense (DoD) to the civil community. Both continue to transform the way we do ordinary, everyday things as individuals and society, delivering wide-ranging economic and social benefits far beyond anything their designers could have dreamt of. And, of course, the Internet now plays an important role in newer GPS applications.

    Operational for about 20 years, GPS is now used by over 1 billion people, mostly as a receiver-on-a-chip built into their smart phones. GPS has also raised our expectations: we now demand meter-level positioning capability anytime and anywhere, including indoors. And, inevitably, success brings imitators: GPS-like systems are under development in Russia, Europe, China, Japan, and India.

    This talk will focus mostly on the principles behind the operation of GPS and the technologies required to realize a global radio-navigation system based on them.
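    One of those principles can be illustrated with a 2-D toy version of positioning: given ranges to beacons at known positions, solve a nonlinear least-squares problem. This sketch (all names and values are illustrative) uses exact ranges and omits the receiver clock-bias unknown that real GPS solves for, which is why real GPS needs at least four satellites rather than three:

```python
import math

def trilaterate(beacons, ranges, guess=(0.0, 0.0), iters=20):
    """Estimate a 2-D position from ranges to known beacons by
    Gauss-Newton least squares. Illustrative sketch only: exact
    ranges, no clock-bias term, no measurement weighting."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the normal equations (J^T J) d = J^T r.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (bx, by), rho in zip(beacons, ranges):
            dx, dy = x - bx, y - by
            d = math.hypot(dx, dy) or 1e-12
            r = rho - d                  # residual: measured minus predicted
            ux, uy = dx / d, dy / d      # unit vector from beacon to estimate
            a11 += ux * ux; a12 += ux * uy; a22 += uy * uy
            b1 += ux * r; b2 += uy * r
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-15:
            break                        # degenerate beacon geometry
        x += (b1 * a22 - b2 * a12) / det
        y += (b2 * a11 - b1 * a12) / det
    return x, y

# Beacons at known positions; ranges measured from the unknown point (3, 4):
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.hypot(3 - bx, 4 - by) for bx, by in beacons]
est = trilaterate(beacons, ranges, guess=(1.0, 1.0))
```

    With well-spread beacons the iteration converges in a handful of steps; poor beacon geometry makes the normal matrix nearly singular, the 2-D analogue of the "dilution of precision" effect in satellite navigation.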

     


    Bio:

    Pratap Misra, a member of the first batch of students to graduate from IITK, has worked in the field of navigation satellites for 25 years, starting with a project at MIT Lincoln Laboratory to combine measurements from GPS and GLONASS, the Soviet answer to GPS, to improve navigation for civil aviation. He is a coauthor, with Professor Per Enge of Stanford University, of a widely used graduate-level engineering textbook on GPS.

    Misra is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He is also a Fellow of the Institute of Navigation (ION), which honored him in 2014 with the Kepler Award "for sustained and significant contributions to the development of satellite navigation."

     



  • Timed Network Games


    Speaker:

    Shibashis Guha, Hebrew University

    Date:2016-08-23
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Network games are widely used as a model for selfish resource-allocation problems. In the classical model, each player selects a simple path connecting her source and target vertex. The cost of using an edge depends on the number of players that use it, and abstracts the fact that different users may use a resource at different times and for different durations. In reality, however, time plays an important role. Sharing the cost of a taxi, for example, makes sense only when traveling together.

    We study timed network games, which add a time component to network games. Each vertex v in the network is associated with a cost function, mapping the load on v to the price that a player pays for staying in v for one time unit with this load. In addition, each edge has a guard, describing time intervals in which the edge can be traversed, forcing the players to spend time on vertices. Unlike earlier work that add a time component to network games, the time in our model is continuous and cannot be discretized. In particular, players have uncountably many strategies, and a game may have uncountably many pure Nash equilibria. We study properties of timed network games with cost-sharing or congestion cost functions: their stability, equilibrium inefficiency, and complexity. In particular, we show that the answer to the question whether we can restrict attention to boundary strategies, namely ones in which edges are traversed only at the boundaries of guards, is mixed.

    (Joint work with Guy Avni and Orna Kupferman)


  • Separations and Lower Bounds in Decision Tree Complexity


    Speaker:

    Swagato Sanyal(TIFR)

    Date:2016-08-22
    Time:15:00:00 (IST)
    Venue:CSE Discussion Room (IIA 425)
    Abstract:

    The Decision Tree model is one of the simplest models of computation, where it is often possible to prove unconditional lower bounds. There are different kinds of decision trees: deterministic, randomized with zero error or bounded error, quantum, etc. A central question in decision tree complexity is whether randomness really helps in this model, which has long been answered in the affirmative. Interestingly, however, the exact relationship between different decision tree complexity measures, and the widest separation possible amongst them, was not well understood for a long time; only recently has a series of works answered many of these fundamental questions. In this talk we will review these recent developments, restricting ourselves mostly to deterministic and randomized decision tree complexity measures.


  • Engineering Software Defined Network (SDN) for Scale


    Speaker:

    Deepak Bansal, Microsoft USA

    Date:2016-08-09
    Time:15:00:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    Microsoft Azure is one of the largest public cloud infrastructures in the world. The network infrastructure for Microsoft Azure is built on a highly scalable and reliable Software Defined Network (SDN). In this talk, Deepak will talk about the SDN implementation in Microsoft Azure – motivation, capabilities, architecture, components, etc. He will specifically focus on the challenges of operating at scale and delivering high reliability, what has been done to meet those challenges, and the lessons learned.

     


    Bio:

    Deepak Bansal runs the Software Defined Network (SDN) team for Microsoft. He has been leading Networking for Azure since 2008, as a member of the founding team for Microsoft Azure. He pioneered the implementation and deployment of SDN in Microsoft Azure, scaling it to millions of customer networks and delivering millions of automated customer-initiated network changes per day. He is an active member of the SDN community, chairing the configuration management working group in ONF and serving on the ONF board. Prior to Azure/SDN, he led the development of the NetIO TCP/IP stack and the IPv6 implementation for Windows. He holds master's and bachelor's degrees in computer science from MIT and IIT Delhi respectively, and over 20 issued patents. His areas of interest include SDN, NFV, congestion control, wireless networks, network protocols, cloud, distributed systems, reliability and scale.

     

     



  • Lightweight Formal Methods for LLVM Verification


    Speaker:

    Prof. Santosh Nagarakatte from CS Department, Rutgers University

    Date:2016-08-08
    Time:03:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Peephole optimizations perform local rewriting to improve the efficiency of the code input to the compiler. These optimizations are a persistent source of bugs. The talk will present Alive and Alive-FP, domain-specific languages for writing integer and floating-point related peephole optimizations in LLVM, respectively. A transformation expressed in these DSLs is shown to be correct automatically by encoding the transformation into constraints whose validity is checked using a Satisfiability Modulo Theory (SMT) solver. Furthermore, an optimization expressed in the DSL can be automatically translated into C++ code that is suitable for inclusion in an LLVM optimization pass.

    These DSLs are an attempt to balance usability and formal methods. We have discovered numerous bugs in the LLVM compiler, and the Alive tool is actively used by LLVM developers to check new optimizations. I will also highlight the challenges in incorporating lightweight formal methods into the tool-chain of the compiler developer and the lessons learned in this project. I will conclude by briefly describing other active projects in my group on approximation, parallel program testing, and verification.


    Bio:

    Santosh Nagarakatte is an Assistant Professor of Computer Science at Rutgers University. He obtained his PhD from the University of Pennsylvania in 2012. His research interests are in Hardware-Software Interfaces spanning Programming Languages, Compilers, Software Engineering, and Computer Architecture. His papers have been selected as IEEE MICRO TOP Picks papers of computer architecture conferences in 2010 and 2013. He has received the NSF CAREER Award in 2015, ACM SIGPLAN PLDI 2015 Distinguished Paper Award, ACM SIGSOFT ICSE 2016 Distinguished Paper Award, and the Google Faculty Research Award in 2014 for his research on LLVM compiler verification.



  • Cheap approximate localization using FM Radio


    Speaker:

    Piyush Kumar 

    Date:2016-05-12
    Time:14:30:00 (IST)
    Venue:IIA 101, Seminar Room, Bharti Building
    Abstract:

    It is hard to imagine a world without GPS. Unfortunately, GPS can be jammed, spoofed or become unavailable. In this talk, I will present a coarse, passive localization system for GPS-denied environments. Our system is based on a cheap software-defined radio (SDR), which is used to listen to broadcast signals from local FM radio stations. We show that the hardware and associated algorithms are capable of localization with average errors of less than 5 miles, without requiring a fingerprinting or crowd-sourcing approach.


    Bio:

    Piyush Kumar is currently visiting IIT Delhi on a Fulbright fellowship. He received his B.Sc. degree in Mathematics and Computing from IIT Kharagpur in 1999 and a Ph.D. in Computer Science from Stony Brook University in 2004. Prior to joining FSU as an Assistant Professor in 2004, he was a visiting scientist at MPI-Saarbruecken, Germany. He is currently an Associate Professor of Computer Science at Florida State University.



  • Exploiting Maximal Overlap for Non-Contiguous Data Movement On Modern GPU Enabled Clusters


    Speaker:

    Dipsankar Banerjee, Ohio State University

    Date:2016-05-09
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In modern high-end computing platforms, the Message Passing Interface (MPI) continues to enjoy widespread popularity due to its portability and features that allow applications to tap the power of today's high-performance computers. In recent years, high-performance clusters have been mostly heterogeneous, with compute nodes equipped with high-throughput accelerators such as Graphics Processing Units (GPUs). Keeping such heterogeneity in mind, MPI has evolved to allow application developers to utilize the compute power of such accelerated nodes through new communication protocols that perform communication over GPU memories in a transparent manner. This is now well established as "CUDA-aware MPI".

    In this talk, I will present the current state-of-the-art designs for CUDA-aware MPI implementations, and then provide an overview of how CUDA-aware MPI has recently been enhanced with a mechanism for higher overlap between CPU compute and GPU-based communication. I will elaborate on how the massive parallelism offered by GPUs can be combined with modern features of the GPU runtime to design a high-throughput mechanism for non-contiguous data movement that provides higher system efficiency and overlap.
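    The non-contiguous movement in question is what MPI derived datatypes describe: for example, a column of a row-major matrix is a strided vector that must be packed into a contiguous buffer before it can be sent. A scalar sketch of that packing step follows (names and parameters are illustrative; the talk's contribution is performing this with many GPU threads and overlapping it with the transfer):

```python
def pack_strided(buf, offset, blocklen, stride, count):
    """Gather `count` blocks of `blocklen` elements, `stride` apart,
    into one contiguous list -- the packing an MPI library performs
    internally for a vector-style derived datatype. On a GPU, each
    block (or even each element) can be copied by a separate thread."""
    out = []
    for i in range(count):
        start = offset + i * stride
        out.extend(buf[start:start + blocklen])
    return out

# Column 0 of a 3x4 row-major matrix stored flat: blocks of 1, stride 4.
column = pack_strided(list(range(12)), 0, 1, 4, 3)
```

    Because the blocks are independent, the gather is embarrassingly parallel, which is why GPU-side packing can outperform a CPU loop over the same data.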


  • Interactive Machine Learning for Information Extraction


    Speaker:

    Sameer Singh

    Date:2016-05-02
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Most of the world's knowledge, be it factual news, scholarly research, social communication, subjective opinions, or even fictional content, is now easily accessible as digitized text. Unfortunately, due to the unstructured nature of text, much of the useful content in these documents is hidden. The goal of information extraction is to address this problem: extracting meaningful, structured knowledge (such as graphs and databases) from text collections. The biggest challenges when using machine learning for information extraction include the high cost of obtaining annotated data and lack of guidance on how to understand and fix mistakes.

    In this talk, I propose interpretable representations that allow users and machine learning models to interact with each other: enabling users to inject domain knowledge into machine learning and for machine learning models to provide explanations as to why a specific prediction was made. I use these techniques to extract relations between entities that are expressed in text. I first describe how symbolic domain knowledge, if provided by the user as first-order logic statements, can be injected into embedding-based machine learning models to improve the predictions. In the second part of the talk, I present an approach to “explain” machine learning predictions using symbolic representations, which the user may annotate for more effective supervision. I present experiments to demonstrate that an interactive interface between a user and machine learning is effective in reducing annotation effort and in quickly training accurate extraction systems.


    Bio:

    Sameer Singh is a Postdoctoral Research Associate at the University of Washington, working on large-scale and interactive machine learning applied to information extraction and natural language processing. He received his PhD from the University of Massachusetts, Amherst, during which he also interned at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was recently selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, has been awarded the Yahoo! Key Scientific Challenges and the UMass Graduate School fellowships, and was a finalist for the Facebook PhD fellowship. (http://sameersingh.org)



  • Analytics - An insightful enabler across the hydrocarbon value chain


    Speaker:

    Steve Cherek and Vinit Verma

    Date:2016-04-29
    Time:10:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The techno-economic dynamic is shifting the balance from an Industrial Paradigm to a Digital Paradigm. Cheap information processing, storage, and transmission are allowing more data to be processed, and more people and devices to be connected, than ever before. This is resulting in datafication - a process by which phenomena are transformed into quantifiable formats. We now have a quantitative basis for decision making like never before: we can represent reality to a much greater degree, tightening the feedback loop. Ubiquitous data makes possible new connections and value opportunities, both within and across industry value chains.

    Data-mediated connections open new opportunities to leverage network effects. In an industrial economy, scale determined winners and losers; in the information economy, taking advantage of network effects determines success. We now compete on insights, not on the paradigm of data ownership. The challenge is to get to the Right Data in Big Data. The talk will discuss how ExxonMobil is responding to this techno-economic dynamic using data science and analytics. We will cover the scale and diversity of ExxonMobil operations, and the breadth and depth of IT capability needed to run our enterprise. Finally, we will show how analytics is a key enabler for transformational outcomes across the hydrocarbon value chain.


    Bio:

    Steve Cherek ExxonMobil Applications Manager - Information Technology
    Steve received his undergraduate degree in Management Information Systems from the University of Oklahoma in 1989. That same year he began his career with Exxon Company as an application support analyst in Houston. Throughout his career, Steve has held a variety of business and IT roles. In 1999, he became the IT manager of Esso Thailand, based in Bangkok. In 2001, he served as a Senior Business Analyst for the U.S. Fuels Marketing, then as a Regional Sales Manager for Lubricants and Petroleum Specialties. He re-joined IT in various Business IT Manager positions. Steve assumed his current role as Applications Manager in 2011. Steve currently resides in Houston with his wife and two children. He is active with United Way, and is a member of the Board of Directors of the DePelchin Children's Center, a United Way Agency.

    Vinit Verma ExxonMobil Analytics Manager - Information Technology
    During his 25 years at ExxonMobil, Vinit has held a variety of global computing positions supporting the Exploration Company, R&D organizations, Chemical manufacturing organization, Fuels Retail Operations and Upstream Financials. As founder of techSpring innovation, he has a passion for the process of innovation. Vinit has Silicon Valley consulting experience with HP, Sun Microsystems and networking & insurance companies. His interests include research work with MIT on collaborative innovation networks. Vinit received a Master of Science degree in Computer Science from Oklahoma State University, and a Bachelor of Sciences degree (Physics, Chemistry, Math) from St. Stephen's College, Delhi.



  • Security Games: Key Algorithmic Principles, Deployed Applications and Research Challenges


    Speaker:

    Prof. Milind Tambe from University of Southern California, Los Angeles,

    Date:2016-04-22
    Time:17:00:00 (IST)
    Venue:LHC
    Abstract:

    Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical infrastructure, protecting endangered wildlife, forests and fisheries, suppressing urban crime or security in cyberspace. Unfortunately, limited security resources prevent full security coverage at all times; instead, we must optimize the use of limited security resources. To that end, our "security games" framework -- based on computational game theory, while also incorporating elements of human behavior modeling, AI planning under uncertainty and machine learning -- has led to building and deployment of decision aids for security agencies in the US and around the world. These decision aids are in use by agencies such as the US Coast Guard, the Federal Air Marshals Service and by various police agencies at university campuses, airports and metro trains.
    Moreover, recent work on "green security games" has led our decision aids to begin assisting NGOs in protection of wildlife; and "opportunistic crime security games" have focused on suppressing urban crime. I will discuss our use-inspired research in security games that is leading to new research challenges, including algorithms for scaling up security games as well as for handling significant adversarial uncertainty and learning models of human adversary behaviors.


    Bio:

    Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI and ACM, as well as a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Rist Prize of the Military Operations Research Society, an IBM Faculty Award, an Okawa Foundation Faculty Research Award, the RoboCup Scientific Challenge Award, and other local awards such as the Orange County Engineering Council Outstanding Project Achievement Award, the USC Associates Award for Creativity in Research and the USC Viterbi Use-Inspired Research Award. Prof. Tambe has contributed several foundational papers in AI in areas such as multiagent teamwork, distributed constraint optimization (DCOP) and security games. For this research, he has received the influential paper award and a dozen best paper awards at conferences such as AAMAS, IJCAI, IAAI and IVA. In addition, Prof. Tambe's pioneering real-world deployments of ''security games'' have led him and his team to receive the US Coast Guard Meritorious Team Commendation from the Commandant, the US Coast Guard First District's Operational Excellence Award, a Certificate of Appreciation from the US Federal Air Marshals Service and a special commendation from the LA Airport Police of the city of Los Angeles. For his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring Award and the ACM Recognition of Service Award. He has also co-founded a company based on his research, ARMORWAY, where he serves as the director of research. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.



  • Better algorithmic bound for the Komlós discrepancy problem


    Speaker:

    Shashwat Garg, TU Eindhoven

    Date:2016-04-22
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The Komlós conjecture and its special case, the Beck-Fiala conjecture, are challenging open problems in discrepancy theory and convex geometry. The best known bounds for both are due to Banaszczyk from 1998, using a non-constructive convex-geometric argument. While there has been a lot of progress in recent years on algorithms for discrepancy theory, matching Banaszczyk's bounds algorithmically seemed challenging. I will talk about our recent work, which gives an algorithm achieving the same bounds as Banaszczyk. I will also say something about the more general theorem proved by Banaszczyk. Our result uses ingredients from semidefinite programming duality, martingale theory and convex geometry.
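
    For context, the conjecture and Banaszczyk's bound can be stated as follows (standard statements, not taken from the talk itself):

```latex
% Komlós conjecture: there is a universal constant K such that for all
% v_1, ..., v_n in R^m with ||v_i||_2 <= 1 there exist signs with
\exists\, \varepsilon \in \{-1,+1\}^n :\quad
  \Bigl\| \sum_{i=1}^{n} \varepsilon_i v_i \Bigr\|_{\infty} \le K .
% Beck--Fiala is the special case where the v_i are the (scaled) column
% incidence vectors of a set system of maximum degree t.
% Banaszczyk (1998): for any convex body C of Gaussian measure >= 1/2,
% signs exist with \sum_i \varepsilon_i v_i \in 5C, which yields
% discrepancy O(\sqrt{\log n}) for Komlós and O(\sqrt{t \log n})
% for Beck--Fiala.
```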


  • Online algorithms with recourse


    Speaker:

    Prof. Amit Kumar, Computer Science, IIT Delhi

    Date:2016-04-21
    Time:15:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Online algorithms model situations where the input data arrives over time, and the algorithm needs to take decisions without knowing the future input. They are typically analyzed in the framework of competitive analysis -- the performance of such an algorithm is compared against an adversary which knows the entire future beforehand. Although this has been a very fruitful approach, it often leads to pessimistic results because the adversary is much more powerful than the online algorithm. Recently, there have been attempts to evolve alternate ways of analyzing online algorithms which give more power to the online algorithm (as compared to the offline adversary). I will discuss some recent work on models which allow the algorithm to change a small number of decisions taken in the past. Drawing from examples in scheduling and graph algorithms, I will show that one can get interesting results in this ''online algorithms with recourse'' model.


  • Adaptive Broadcast by Fault-Local Self-Stabilizing Spanning Tree Switching


    Speaker:

    Sushanta Karmakar, IIT Guwahati

    Date:2016-04-11
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Switching between protocols is an elegant way of enabling adaptation in distributed systems. Self-stabilizing algorithms have been proposed as a mechanism to handle transient failures in distributed systems. In this work we illustrate self-stabilization in distributed protocol switching by proposing a self-stabilizing distributed algorithm for dynamically switching from one spanning tree to another for broadcasting messages. Both trees are precomputed and rooted at the same node, which is the source of the broadcast. Different properties relating to the delivery of broadcast messages under various failure conditions are investigated. More specifically, the algorithm guarantees that under no failure each broadcast message is correctly delivered to all nodes in spite of switching. Under arbitrary transient failures in the system, the switching eventually completes, though no specific guarantee can be given on the delivery of broadcast messages. Finally, we investigate the properties that can be guaranteed on the delivery of broadcast messages under a single transient failure. We also consider the case when the trees are not precomputed.


  • Automata and logics over infinite alphabets


    Speaker:

    Amaldev Manuel, CMI

    Date:2016-03-31
    Time:14:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Languages over infinite alphabets provide a useful abstraction for systems with an element of unbounded data, for instance identity information. In this talk I will give an overview of proposals, including our own, addressing the paradigmatic question of finding suitable logical, automata-based, and algebraic models for them.


  • High performance parallel computing software: from basics to exa-scale technology


    Speaker:

    Dr. Reiji SUDA

    Date:2016-03-22
    Time:11:30:00 (IST)
    Venue:404, Bharti building
    Abstract:

    Computer clock frequencies have not improved in recent years, and progress in semiconductor technology contributes to computing performance mostly through higher parallelism. Not only supercomputers, but also PCs and embedded processors now have multiple cores, and will have 64 or 128 cores in the foreseeable future. Supercomputer performance will soon be close to 1 exa (1,000,000,000,000,000,000) flops.
    In this lecture, basic concepts of software (i.e. programming) technology for high performance parallel computing are introduced. Analysis of the key factors of parallel processing performance reveals that parallelism alone is not enough; locality is also essential. Key features of the next generation of parallel processors will be briefly inspected, and some recent research efforts toward exa-scale parallel processing will be explained.
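
    One standard back-of-the-envelope model for why raw parallelism alone does not guarantee performance is Amdahl's law (a textbook illustration, not material from the talk): the serial fraction of a program caps the achievable speedup no matter how many cores are added.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup predicted by Amdahl's law for a program whose
    parallelizable fraction is `parallel_fraction`, run on `cores` cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 95% of the work parallelized, 128 cores give well under 20x:
print(round(amdahl_speedup(0.95, 128), 1))  # → 17.4
# The limit as cores grow without bound is 1 / serial_fraction = 20x.
```

    Locality adds a further cap on top of this: the model assumes the parallel portion scales perfectly, which in practice requires data to stay close to the cores that use it.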


    Bio:

    Dr. Reiji SUDA is a Professor in the Department of Computer Science, the University of Tokyo. Since his master's work in the early 1990s, he has been working in high performance numerical computing, on topics such as: parallel sparse linear solvers for circuit simulation, the parallel Hessenberg double-shift QR algorithm, the Fast Multipole Method (FMM), fast spherical harmonic transforms, task scheduling for parallel systems, automatic performance tuning (or autotuning), GPGPU, power-efficient computing, and communication-avoiding algorithms. He teaches courses in numerical algorithms for undergraduate students, and a course in parallel numerical computing for graduate students, at the University of Tokyo.



  • Some problems in low-level computer vision and 3D multi-view geometry


    Speaker:

    Pulak Purakit, ISI Kolkata 

    Date:2016-03-21
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In this talk, I will address some existing problems in low-level computer vision and 3D multi-view geometry.

    On the Morphological Regularizations for Super-resolution image reconstruction
    ∗ Why do we care about this problem? - Regularization is a well-known technique to convert an ill-posed problem into a well-posed one and then obtain a stable solution.
    ∗ What methods have been studied? - Researchers have been studying total variation (TV) based regularization methods for general inverse problems for the last few decades. However, the images reconstructed by these methods sometimes exhibit ringing artifacts near strong edges.

    ∗ What do we propose? - Based on multi-scale morphology, we develop non-linear regularization methods for edge-preserving image reconstruction for general ill-posed inverse problems (in particular super-resolution image reconstruction) that produce better-reconstructed images.
    On the case of large hyperedges for higher order clustering
    ∗ Why do we care? - Most of the existing work on hypergraph clustering has considered the smallest possible hyperedge size, due to a lack of study of the potential benefits of large hyperedges and of effective algorithms to generate them.
    ∗ What's the answer? - Here, we establish that large hyperedges are better from both theoretical and empirical standpoints.
    ∗ What's the approach? - We propose a novel guided sampling strategy for large hyperedges, based on the concept of random cluster models. Our method can generate pure large hyperedges that significantly improve grouping accuracy without an exponential increase in sampling costs.
    ∗ What is the implication of the answer? - For higher order clustering, always use large hyperedges and a guided sampling method to sample them.
    Globally optimal maximum consensus using A*:
    ∗ Why is this problem so important in 3D vision? - Maximum consensus is arguably the criterion of choice for robust estimation tasks in computer vision. Despite its widespread use, optimizing the criterion is still customarily done by randomized sample-and-test techniques.
    ∗ What's the solution we propose? - We aim to change this state of affairs by proposing a very efficient algorithm for global maximization of consensus. Within the framework of LP-type methods, we show how consensus maximization for a wide variety of vision tasks can be posed as a tree search problem.
    ∗ What's the impact on the literature? - By returning globally optimal estimates within tens of seconds on typical-sized problems, our approach provides a practical alternative to randomized techniques.


    Bio:

    Currently, he is a visiting scientist at ISI Kolkata. Previously, he was a postdoctoral researcher at the University of Adelaide, Australia (Sep '13 - Feb '16). He will be joining Birmingham University, UK as a Research Associate from April 2016. He completed his M.Tech (2009) and PhD (2013) in Computer Science at ISI Kolkata, and received his B.Sc. in Mathematics (2005) and M.Sc. in Applied Mathematics (2007) from the University of Calcutta.
    Interests: His research interests include image processing, computer vision, 3D multiple-view geometry, and machine learning algorithms for 3D-geometric problems.



  • "Exploiting Symmetries in Statistical (Relational) Models for Efficient Inference"


    Speaker:

    Dr. Parag Singla

    Date:2016-03-11
    Time:15:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Several domains of interest, including social network analysis, biology, vision, NLP and IE, need to represent the underlying relational structure as well as model uncertainty. Statistical relational models such as Markov logic achieve this by combining the power of relational representations (e.g. first-order logic) with statistical models. To perform reasoning efficiently in these models, lifted inference exploits the underlying symmetry by grouping together objects (or states) which behave similarly to each other and, hence, have the same associated probabilities. In this talk, we will examine two recent advances in exploiting symmetries for lifted inference: 1) lifting in the presence of constraints over the formulas in the theory, and 2) lifting with contextual symmetries, i.e., symmetries available only under specific assignments to a subset of variables. We will show that the proposed techniques significantly advance the state of the art both conceptually and empirically. We will conclude the talk with some related work and future frontiers.


  • "Similarity of Weighted Tree/Graph Structures and Applications"


    Speaker:

    Prof Virendra Bhavsar from University of New Brunswick

    Date:2016-02-25
    Time:16:30:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    The notion of similarity (or matching) has played a very important role from the beginnings of artificial intelligence. The matching process, depending on the application, may involve keyword matching (e.g. Google), schema matching, taxonomic similarity and ontology matching, or other types of similarities. Clustering forms one of the basic computations in many Big Data applications and it is based on the concept of similarity (or distance).

    We have developed novel weighted tree/graph similarity algorithms applicable to many domains, e.g. bioinformatics, e-Business, e-Health, e-Learning and semantic web. We have also carried out high performance implementations of these algorithms on cluster and graphics processing units (GPUs). This talk will present an overview of the algorithms and their applications.

    Dr. Virendrakumar C. Bhavsar is an Honorary Research Professor, the founding Director of the Advanced Computational Laboratory, and former Professor as well as Dean of the Faculty of Computer Science at the University of New Brunswick. He is a Fellow of the International Institute of Cognitive Informatics and Cognitive Computing. He received the B.Eng. (Electronics and Telecommunications) from University of Poona, India, and the M.Tech. (Electrical Eng.) as well as Ph.D. (Electrical Eng.) degrees from the Indian Institute of Technology, Bombay. He was a faculty member at the Department of Computer Science and Engineering, Indian Institute of Technology, Bombay, 1974-83. Since 1983 he has been at the University of New Brunswick, Canada.

    He was a founding member, Board of Directors, C3.ca, Inc. – a high performance computing initiative in Canada. He is President of SAI Super and Intelligent Computer Systems, Inc., Fredericton. He has been a consultant to industries in the areas of semantic web, semantic search and matchmaking, artificial intelligence and high performance computing. He has published more than 160 research papers in journals, conference proceedings and book chapters, and has edited four volumes. He co-led the bioinformatics component of the Canadian Potato Genomics project. Recently, he led the Province of New Brunswick component of the Atlantic Computational Excellence Network (ACEnet) – a multimillion-dollar high performance computing initiative in Atlantic Canada. His current areas of research include intelligent systems with applications in e-Business and e-Learning, semantic search and semantic matchmaking, natural language processing and deep learning.



  • Low Distortion Inter-surface Mapping via Alternating Convex Optimizations


    Speaker:

    Manish Mandad

    Date:2016-02-18
    Time:16:00:00 (IST)
    Venue:404, Bharti Building
    Abstract:

    We present a novel approach to creating a homeomorphic map between two discrete surfaces. While most previous approaches compose maps over intermediate domains, we directly optimize a map between two surfaces by computing a variance-minimizing mass transport plan. We solve this non-linear problem, which is akin to minimizing the Dirichlet energy of both the map and its inverse, using two alternating convex optimization problems for which efficient numerical algorithms exist. A coarse-to-fine approach based on fast evaluation of geodesic distances through diffusion distances is then employed to provide a robust and efficient inter-surface mapping algorithm that naturally finds correspondences between shapes, with little to no user interaction.


    Bio:

    Manish Mandad is a PhD student at INRIA Sophia Antipolis working with Dr. Pierre Alliez. He will soon start his post-doc in Prof. Leif Kobbelt's group at RWTH Aachen. He is an IIT Delhi alumnus (B.Tech in Computer Science, 2005-09).



  • Airports and Railways: Facility Location Meets Network Design


    Speaker:

    Tobias Moemke, University of Saarlandes

    Date:2016-02-10
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    We introduce a new framework of Airport and Railway Problems, which combines capacitated facility location with network design. In this framework we are given a graph with weights on the vertices and on the edges, together with a parameter k. The vertices of the graph represent cities, and weights denote respectively the costs of opening airports in the cities and building railways that connect pairs of cities. The parameter k can be thought of as the capacity of an airport. The goal is to construct a minimum cost network of airports and railways connecting the cities, where each connected component in the network spans at most k vertices, contains an open airport, and the network satisfies some additional requirements specific to the problem in the framework.

    We consider two problems in this framework. In the AR_F problem there are no additional requirements for the network. This problem is related to capacitated facility location. In the AR_P problem, we require each component to be a path with airports at both endpoints. AR_P is a relaxation of the capacitated vehicle routing problem (CVRP).

    We consider the problems in the two-dimensional Euclidean setting. We show that both AR_F and AR_P are NP-hard, even for uniform vertex weights (i.e., when the cost of building an airport is the same for all cities). On the positive side, we provide polynomial time approximation schemes for AR_F and AR_P when vertex weights are uniform. We also investigate AR_F and AR_P for k = infinity. In this setting we present an exact polynomial time algorithm for AR_F with general vertex costs, which also works for general edge costs. In contrast to AR_F, AR_P remains NP-hard when k = infinity, and we present a polynomial time approximation scheme for general vertex weights.
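
    For the uncapacitated case (k = infinity) of AR_F, a folklore reduction makes polynomial-time solvability plausible (an illustrative sketch, not necessarily the algorithm from the talk): add a virtual root connected to every city by an edge whose cost is that city's airport cost, and compute a minimum spanning tree of the augmented graph; the tree edges incident to the root correspond to opened airports.

```python
def min_cost_network(airport_cost, edges):
    """Minimum cost of airports + railways when airport capacity is unbounded.

    airport_cost: list of per-city airport opening costs.
    edges: list of (u, v, cost) railway options between cities.
    Reduction: virtual root r = n with edges (r, v) of cost airport_cost[v];
    a minimum spanning tree of the augmented graph gives the optimum,
    since every component of the network must contain one airport.
    """
    n = len(airport_cost)
    root = n
    all_edges = [(c, root, v) for v, c in enumerate(airport_cost)]
    all_edges += [(c, u, v) for u, v, c in edges]
    parent = list(range(n + 1))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0
    for c, u, v in sorted(all_edges):  # Kruskal's algorithm
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += c
    return total

# Two cities: airports cost 10 and 1, a railway between them costs 2.
# Optimal: open the cheap airport (1) and build the railway (2) = 3.
print(min_cost_network([10, 1], [(0, 1, 2)]))  # → 3
```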


  • On approximating strip packing with a better ratio than 3/2


    Speaker:

    Andreas Wiese, MPI-Informatik

    Date:2016-02-10
    Time:16:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In the strip packing problem, we are given a set of rectangular items that we want to place in a strip of given width so as to minimize the height of the obtained packing. It is a very classical two-dimensional packing problem that has received a lot of attention, with applications in many settings such as stock-cutting and scheduling. A straightforward reduction from Partition shows that the problem cannot be approximated with an absolute factor better than 3/2. However, this reduction requires the numeric values to be exponentially large. In this work, we present a (1.4+ε)-approximation algorithm with pseudo-polynomial running time. This implies that for polynomially bounded input data the problem can be approximated with a strictly better ratio than for exponential input, which is a very rare phenomenon in combinatorial optimization.
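
    As background, simple shelf heuristics already give constant-factor approximations for strip packing. The sketch below implements First-Fit Decreasing Height (a classical heuristic, not the (1.4+ε) algorithm from the talk); the names and data layout are illustrative.

```python
def ffdh(rects, strip_width):
    """First-Fit Decreasing Height shelf heuristic for strip packing.

    rects: list of (width, height) items, each with width <= strip_width.
    Returns the total height of the packing (sum of shelf heights).
    Items are considered in order of decreasing height; each item goes on
    the first shelf with enough horizontal room, else it opens a new shelf.
    """
    shelves = []  # each shelf is [remaining_width, shelf_height]
    for w, h in sorted(rects, key=lambda r: -r[1]):
        for shelf in shelves:
            if shelf[0] >= w:
                shelf[0] -= w
                break
        else:
            shelves.append([strip_width - w, h])  # new shelf as tall as item
    return sum(height for _, height in shelves)

# Three items in a width-4 strip: the two 2x3 items share a shelf of
# height 3, the 2x2 item opens a second shelf of height 2, total 5.
print(ffdh([(2, 3), (2, 3), (2, 2)], 4))  # → 5
```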


  • "Facility Location in dynamic networks"


    Speaker:

    Professor Binay Bhattacharya, Simon Fraser University

    Date:2016-02-09
    Time:12:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    This talk discusses the problem of locating facilities in dynamic networks. In dynamic flow models it takes time for flow to pass along an arc, and flow can be delayed at nodes. An edge's capacity represents the amount of items that can enter the edge in a unit of time. Dynamic flow problems require moving a set of items from source nodes to sink nodes in minimal time. Congestion occurs at a vertex when more items are waiting at the vertex to enter an edge than can immediately leave it. Finding a quickest dynamic flow usually requires designing flows that avoid (too much) congestion.
    These problems are motivated by evacuation planning during natural disasters such as earthquakes and weather disturbances. Each vertex of a dynamic flow network represents a source with a certain number of people. The objective of the planning is to determine an evacuation schedule that moves everybody from all the source nodes to evacuation centers (sinks) in minimal time. The evacuation problem can be viewed as a generalization of the classical k-center and k-median problems.


  • Improving Network Security Through Insurance - A Tale of Cyber-Insurance Markets


    Speaker:

    Ranjan Pal

    Date:2016-01-29
    Time:16:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    In recent years, security researchers have well established the fact that technical security solutions alone will not result in a robust cyberspace due to several issues jointly related to the economics and technology of computer security. In this regard some of them proposed cyber-insurance to be a suitable risk management technique that has the potential to jointly align with the various incentives of security vendors (e.g., Symantec, Microsoft, etc.), cyber-insurers (e.g., security vendors, ISPs, cloud providers, etc.), regulatory agencies (e.g., government), and network users (individuals and organizations), in turn paving the way for robust cyber-security. In this talk, we ask the following important question: can cyber-insurance really improve the security in a network? To answer our question we adopt a market-based approach. We analyze regulated monopolistic, and competitive cyber-insurance markets in our work, where the market elements consist of risk-averse cyber-insurers, risk-averse network users, a regulatory agency, and security vendors (SVs). Our analysis proves that technical solutions will alone not result in optimal network security, and leads to two important results: (i) without contract discrimination amongst users, there always exists a unique market equilibrium for both market types, but the equilibrium is inefficient and does not improve network security, and (ii) in monopoly markets, contract discrimination amongst users results in a unique market equilibrium that is efficient and results in improvement of network security - however, the cyber-insurer can make zero expected profit. The latter fact is often sufficient to de-incentivize the formation or practical realization of successful and stable cyber-insurance markets. 
    To alleviate this problem, we suggest SVs to enter a symbiotic relationship with insurers and lock the latter's clients via a fair, (topology, investment) dependent, and differentiated pricing mechanism, for the security products manufactured by the SVs. In return, for an observed increase in profit, the SVs will share a portion of their profit increase with the cyber-insurers. We mathematically study various pricing mechanisms designed via a common Stackelberg game framework, for a monopoly market setting. Among them, the optimal pricing problem in the binary pricing setting turns out to be NP-hard, for which we develop an efficient randomized approximation algorithm that achieves a pricing very close to the optimal. Our game analysis, combined with market simulation results on practical networking topologies for both monopoly and oligopoly markets illustrate the potential for cyber-insurers to have market success.


    Bio:

    Ranjan Pal is a Research Scientist at the University of Southern California (USC), affiliated with both the Electrical Engineering and Computer Science departments, where he co-leads the Quantitative Evaluation and Design Group (QED) with Professor Leana Golubchik. He received his PhD in Computer Science from USC in 2014, and was the recipient of the Provost Fellowship throughout his PhD studies. During his PhD, Ranjan held visiting scholar positions at the School of Engineering and Applied Science, Princeton University, USA, and Deutsch Telekom Research Laboratories (T-Labs), Berlin, Germany. His primary research interests lie in the performance modeling, analysis, and design of cyber-security, privacy, communication networks, and the Smart Grid, using tools from economics, game theory, applied probability, algorithms, and mathematical optimization. His research has generated press interests from USC News, and MIT Technology Review. Ranjan has also consulted for Accel Partners.



  • The complexity of expansion problems


    Speaker:

     L. Anand, Princeton

    Date:2016-01-25
    Time:16:00:00 (IST)
    Venue:Bharti Building
    Abstract:

    Graph-partitioning problems are a central topic of research in the study of algorithms and complexity theory. They are of interest to theoreticians, with connections to error-correcting codes, sampling algorithms and metric embeddings, among others, and to practitioners, as algorithms for graph partitioning can be used as fundamental building blocks in many applications. One of the central problems studied in this field is the sparsest cut problem, where we want to compute the cut which has the least ratio of the number of edges cut to the size of the smaller side of the cut. This ratio is known as the expansion of the cut.
    In this talk, I will discuss three notions of expansion - edge expansion in graphs, vertex expansion in graphs and edge expansion in hypergraphs. I will define suitable notions of spectra and show how the celebrated "Cheeger's inequality" carries over to all three settings. I will also talk about higher-order variants of these notions of expansion (i.e., notions of expansion corresponding to partitioning the graph/hypergraph into more than two pieces), and about approximation algorithms for these problems.
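
    For reference, the classical Cheeger inequality for edge expansion in graphs reads as follows (a standard statement, not from the talk): writing $\phi(G)$ for the conductance of $G$ and $\lambda_2$ for the second smallest eigenvalue of the normalized Laplacian,

```latex
\frac{\lambda_2}{2} \;\le\; \phi(G) \;\le\; \sqrt{2\lambda_2},
\qquad\text{where}\qquad
\phi(G) \;=\; \min_{S :\, \mathrm{vol}(S) \le \mathrm{vol}(V)/2}
  \frac{|E(S, \bar{S})|}{\mathrm{vol}(S)} .
```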


  • From Vision-Realistic Rendering to Vision Correcting Displays


    Speaker:

    Professor Brian A. Barsky, 

    Computer Science Division
    School of Optometry
    Berkeley Center for New Media
    Berkeley Institute of Design
    Arts Research Center
    University of California, Berkeley

    Date:2016-01-22
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Present research on simulating human vision and on vision-correcting displays that compensate for the optical aberrations in the viewer's eyes will be discussed. The simulation is not an abstract model but incorporates real measurements of a particular individual's entire optical system. In its simplest form, these measurements can be the individual's eyeglasses prescription; beyond that, more detailed measurements can be obtained using an instrument that captures the individual's wavefront aberrations. Using these measurements, synthetic images are generated. This process modifies input images to simulate the appearance of the scene for the individual. Examples will be shown of simulations using data measured from individuals with high myopia (near-sightedness), astigmatism, and keratoconus, as well as simulations based on measurements obtained before and after corneal refractive (LASIK) surgery.

    Recent work on vision-correcting displays will also be discussed. Given the measurements of the optical aberrations of a user's eye, a vision correcting display will present a transformed image that when viewed by this individual will appear in sharp focus. This could impact computer monitors, laptops, tablets, and mobile phones. Vision correction could be provided in some cases where spectacles are ineffective. One of the potential applications of possible interest is a heads-up display that would enable a driver or pilot to read the instruments and gauges with his or her lens still focused for the far distance.


    Bio:

    Brian A. Barsky is Professor of Computer Science and Vision Science, and Affiliate Professor of Optometry, at the University of California at Berkeley, USA. He is also a member of the Joint Graduate Group in Bioengineering, an interdisciplinary and inter-campus program between UC Berkeley and UC San Francisco, and a Fellow of the American Academy of Optometry (F.A.A.O.). Professor Barsky has co-authored technical articles in the broad areas of computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation. He is also a co-author of the book An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, co-editor of the book Making Them Move: Mechanics, Control, and Animation of Articulated Figures, and author of the book Computer Graphics and Geometric Modeling Using Beta-splines. Professor Barsky has also held visiting positions at numerous universities in Europe and Asia. He is a speaker at many international meetings, an editor for technical journals and book series in computer graphics and geometric modelling, and a recipient of an IBM Faculty Development Award and a National Science Foundation Presidential Young Investigator Award. Further information about Professor Barsky can be found at http://www.cs.berkeley.edu/~barsky/biog.html.




  • Monitoring Big, Distributed, Streaming Data


    Speaker:

    Prof. Assaf Schuster

    Date:2016-01-19
    Time:16:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    More and more tasks require efficient processing of continuous queries over scalable, distributed data streams. Examples include optimizing systems using their operational log history, mining sentiments using sets of crawlers, and data fusion over heterogeneous sensor networks. However, distributed mining and/or monitoring of global behaviors can be prohibitively difficult. The naïve solution, which sends all data to a central location, mandates extremely high communication volume, thus incurring unbearable overheads in terms of resources and energy. Furthermore, such solutions require an expensive, powerful central platform, and data transmission may violate privacy rules. An attempt to enhance the naïve solution by periodically polling aggregates is bound to fail, exposing a vicious tradeoff between communication and latency. Given a continuous global query, the solution proposed in the talk is to generate filters, called safe zones, to be applied locally at each data stream. Essentially, the safe zones represent geometric constraints which, until violated by at least one of the sources, guarantee that a global property holds. In other words, the safe zones allow for constructive quiescence: there is no need for any of the data sources to transmit anything as long as all constraints hold, with each local data stream confined to its local safe zone. The typically rare violations are handled immediately, so the latency for discovering global conditions is negligible. The safe zones approach makes the overall system implementation, as well as its operation, much simpler and cheaper. The saving, in terms of communication volume, can reach many orders of magnitude. The talk will describe a general approach for compiling efficient safe zones for many tasks and system configurations.
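    The safe-zone idea can be sketched in a few lines. This is a minimal illustration, not the talk's actual construction: the global property is "average <= THRESHOLD", and each node's (deliberately trivial) safe zone is "my local value <= THRESHOLD", so no node communicates until its own value leaves the zone. The class and function names are my own.

```python
# Toy safe-zone monitoring: nodes stay silent while their local value
# remains inside the local safe zone; the coordinator is consulted only
# on a violation. The safe-zone shape here is deliberately trivial.

THRESHOLD = 10.0

class StreamNode:
    def __init__(self, name):
        self.name = name
        self.value = 0.0

    def update(self, new_value):
        """Apply a local stream update; report only on safe-zone exit."""
        self.value = new_value
        return self.value > THRESHOLD  # True => must sync with coordinator

def coordinator_check(nodes):
    """Run only after some node leaves its safe zone: poll everyone."""
    avg = sum(n.value for n in nodes) / len(nodes)
    return avg <= THRESHOLD

nodes = [StreamNode(f"n{i}") for i in range(3)]
violations = [n.update(v) for n, v in zip(nodes, [4.0, 9.5, 12.0])]
# Only the third node triggers communication; the first two stay silent,
# and the coordinator confirms the global average still satisfies the query.
```

    Note that a single local violation does not imply a global one: here the poll finds the average still below the threshold, which is exactly the communication the scheme saves in the common case.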


    Bio:

    Prof. Assaf Schuster of the Computer Science Department at the Technion is an ACM Fellow and a world-leading expert in distributed and scalable data mining, big-data analytics and prediction, cyber security and system vulnerabilities, privacy preservation, cloud resource management, and more. Prof. Schuster has published more than 200 papers in highly selective conferences and journals, some of which have won prestigious awards. He has consulted for leading hi-tech companies, such as IBM, HP, Microsoft, and Verint. He has participated in the bumpy journey of quite a few startups, some of which were successful. His research group is well known for its contributions to the field of big data and scalable, real-time knowledge discovery in distributed data streams.

     



  • Architecting Next-Generation Mobile Platforms


    Speaker:

    Prof. Chita Das

    Date:2016-01-15
    Time:11:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    The demand for feature-rich mobile systems such as wearables, smartphones and tablets has already outpaced the growth of traditional computing platforms. It is projected that SoCs with tens of cores and hundreds of IPs/accelerators will be designed to provide unprecedented levels of features and functionality in the future. Design of such mobile systems with the required performance and power budgets, along with other design constraints, will be a daunting task for computer architects, since any ad hoc, piecemeal solution is unlikely to result in an optimal design. This requires holistic exploration of the complete design space to understand the system-level design trade-offs.

    In this talk, I will discuss several research challenges and some of our ongoing efforts to address these issues. I will start with a comprehensive mobile system simulation platform, called GemDroid, for facilitating the execution of mobile applications and analyzing the performance and energy behavior of CPU, memory, and IPs collectively. Next, we will discuss the design of a heterogeneous memory controller and a sub-framing mechanism to enhance memory system performance. Finally, the scope for performance and power optimizations in the context of multiple applications will be outlined. The talk will conclude with pointers to some future research directions in this emerging area.


    Bio:

    Chita Das is a Distinguished Professor of Computer Science and Engineering at the Pennsylvania State University. His main areas of interest include CMPs and many-core architectures, GPUs, Clouds/datacenters, mobile platforms, performance evaluation, and fault-tolerant computing. In particular, he has worked extensively in the area of design and analysis of interconnection networks/on-chip interconnects. He has published over 200 papers in the above areas, has received several best paper awards, and has served on many program committees, and editorial boards. He is a Fellow of the IEEE.



  • Static and Dynamic Reasoning for SDNs NetPL


    Speaker:

    Prof. Shriram Krishnamurthi

    Date:2016-01-13
    Time:11:00:00 (IST)
    Venue:CSE Seminar Room Bharti Building 501
    Abstract:

    Software-Defined Networking enables new techniques for reasoning about network behaviour. Moreover, controller programs themselves are amenable to reasoning, independent of the network they control. This talk presents some recent work in both static and dynamic reasoning for SDNs and lays out a landscape for thinking about these issues.


    Bio:

    Shriram Krishnamurthi is a computer scientist, currently a professor of computer science at Brown University, and a member of the core development group for the Racket programming language, responsible for the creation of software packages including the Debugger, the FrTime package, and the networking library.
    Krishnamurthi received his PhD at Rice University in 2000, under the direction of Matthias Felleisen. His dissertation is on linguistic reuse and macro systems in the presence of first-class modules. Starting from this topic, Krishnamurthi has moved into software engineering and works on topics such as access control, modularization of verification, web-based interactive programming, and more. His most recent effort is a novel, time-oriented programming language, called Flapjax, in support of asynchronous web programming. Krishnamurthi has also authored a textbook on programming language design.
    In 2012, Krishnamurthi became the inaugural winner of the Robin Milner Young Researcher Award, given by SIGPLAN to a researcher whose research career began within 20 years of the nomination date. The award citation describes Krishnamurthi as "a prolific researcher who brings programming language theory to bear in many other disciplines, thus exposing its foundational value".



  • A Markovian Approach to Modeling Preferences and Assortment Optimization


    Speaker:

    Vineet Goyal, Columbia U.

    Date:2016-01-08
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Consider a seller with n (substitutable) products with fixed prices. Each consumer has a random preference (a permutation over products) and selects the most preferred product among the ones offered by the seller. The seller needs to decide on the subset of products to offer such that expected revenue over random preferences is maximized. This problem is referred to as the assortment planning problem in the literature and arises in many applications including retailing, recommendations and online advertising. One of the fundamental challenges in these problems is to identify a "good" model for consumer preferences and substitution behavior, especially since the preferences are latent and not observable.
    We consider a Markovian approach to address the model selection problem where substitutions are modeled as state transitions in a Markov chain. We show that this model gives a simultaneous approximation for a large family of parametric preference models considered in the literature. Furthermore, the assortment optimization problem under this model can be solved efficiently. We present an iterative algorithm based on a "local-ratio" framework that gives an optimal solution for the unconstrained problem and a constant-factor approximation for the capacity-constrained version. The local-ratio framework allows us to linearize a non-linear revenue function and also provides interesting insights for the assortment optimization problem over other preference models.
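    The substitution-as-transition idea can be made concrete with a toy sketch (not the talk's algorithm): a customer arrives at a product, and if it is not offered, follows the Markov chain's transitions until reaching an offered (absorbing) product; expected revenue weights prices by absorption probabilities. All numbers and function names below are made up for illustration.

```python
# Toy Markov-chain choice model: offered products absorb probability
# mass; non-offered products pass their mass along substitution
# transitions. Absorption probabilities are the choice probabilities.

def choice_probabilities(arrival, transitions, offered, iters=200):
    n = len(arrival)
    mass = list(arrival)          # probability mass on non-absorbed states
    absorbed = [0.0] * n          # mass absorbed at each offered product
    for _ in range(iters):
        new_mass = [0.0] * n
        for i in range(n):
            if mass[i] == 0.0:
                continue
            if i in offered:      # offered product: customer buys it
                absorbed[i] += mass[i]
            else:                 # substitute along the Markov chain
                for j in range(n):
                    new_mass[j] += mass[i] * transitions[i][j]
        mass = new_mass
    return absorbed

arrival = [0.5, 0.3, 0.2]         # initial preference distribution
transitions = [[0.0, 0.6, 0.4],   # row i: substitution from product i
               [0.5, 0.0, 0.5],
               [0.7, 0.3, 0.0]]
prices = [10.0, 8.0, 6.0]

def expected_revenue(offered):
    probs = choice_probabilities(arrival, transitions, offered)
    return sum(prices[i] * probs[i] for i in offered)
```

    With these made-up numbers, offering only the highest-priced product yields higher expected revenue than offering everything, since all substitution mass is eventually absorbed there; this is the trade-off the assortment optimization navigates.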


  • Mining Communication Motifs in Dynamic Networks


    Speaker:

    Sayan Ranu, IIT Madras

    Date:2016-01-04
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    A fundamental problem in behavioral analysis of human interactions is to understand how communications unfold. In this talk, I will analyze this problem by mining communication motifs from dynamic interaction networks. A communication motif is a recurring subgraph that has a similar sequence of information flow. Mining communication motifs requires us to explore the exponential subgraph search space where existing techniques fail to scale. To tackle this scalability bottleneck, I will discuss a technique called COMMIT. COMMIT converts a dynamic graph into a database of sequences. Through careful analysis in the sequence space, only a small portion of the exponential search space is accessed to identify regions embedding communication motifs. Extensive experiments on three different social networks show COMMIT to be up to two orders of magnitude faster than baseline techniques. Furthermore, qualitative analysis demonstrates communication motifs to be effective in characterizing the recurring patterns of interactions while also revealing the role that the underlying social network plays in shaping human behavior.



2016 talks

  • What is in a Name? An Access-Based Abstraction for Points-to Analysis


    Speaker:

    Vini Kanvar 

    Date:2016-12-27
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    A points-to analysis computes a sound abstraction of memory using a name-based abstraction that groups concrete memory locations using names: All stack/static locations that are accessed using the same variable name are grouped together across all execution instances of the program and so are all heap locations allocated by the same statement. Since this grouping is fixed across all program points (even in a flow-sensitive analysis), it merges pointer relationships indiscriminately making the heap abstraction significantly imprecise.

    We propose an access-based abstraction that sub-divides each name-based group into subgroups using an additional criterion of the sets of access paths reaching the locations. The intuition is that the memory locations that are both allocated and accessed alike should be treated alike and the locations that may be allocated or accessed differently should not be merged. Since the ways of accessing locations could change flow sensitively because of the assignment statements in a program, the subgroups change flow sensitively. This creates a more precise view of the memory. Theoretically, it is at least as precise as the name-based abstraction when no name-based group can be subdivided into smaller subgroups; practically it is far more precise.

    Our empirical measurements show the benefits of access-based abstraction on SPEC CPU 2006 and heap-manipulating SV-COMP benchmarks. Our prototype implementation scales to 20.6 kLoC and can improve precision by up to 99% (in terms of the average number of aliases of an access path).
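    The difference between the two abstractions can be illustrated with a toy grouping (my own construction, not the paper's formalism): concrete heap locations keyed by allocation site alone versus by the pair (allocation site, set of access paths reaching the location). All sites and paths below are made up.

```python
# Toy comparison of name-based vs. access-based grouping of concrete
# heap locations. The access-based key adds the set of access paths,
# so locations from the same allocation site can be kept apart.

from collections import defaultdict

# Concrete locations: (alloc_site, access_paths_that_reach_it)
locations = [
    ("L1", {"x"}),        # allocated at L1, reachable only via x
    ("L1", {"y", "z"}),   # same site, but reached via y and z
    ("L2", {"w"}),
]

name_based = defaultdict(list)
for site, paths in locations:
    name_based[site].append(paths)                    # key: site only

access_based = defaultdict(list)
for site, paths in locations:
    access_based[(site, frozenset(paths))].append(paths)

# The name-based abstraction merges both L1 locations into one group;
# the access-based abstraction keeps them in separate subgroups.
```

    Since the reaching access paths change flow-sensitively at assignments, a real analysis recomputes these subgroups per program point; the sketch only shows the grouping criterion itself.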


    Bio:

    Vini Kanvar is a senior PhD student working with Prof. Uday Khedker in the area of program analysis.



  • On the Single-Pass Streaming Complexity of the Set Cover Problem


    Speaker:

    Sanjeev Khanna, U. Penn

    Date:2016-12-20
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In the set cover problem, we are given a collection of sets over a universe of elements, and the goal is to find a sub-collection of these sets whose union covers the universe. The set cover problem is a fundamental optimization problem with many applications in computer science and related disciplines. In this talk, we investigate the set cover problem in the streaming model of computation whereby the sets are presented one by one in a stream, and we wish to find an approximate set cover or simply estimate its size using a space-efficient algorithm.

    We give a tight resolution of the tradeoff between the space available to the algorithm and the quality of the set cover approximation. Our results also extend to the more general problem of estimating the optimal value of a covering integer program presented in a stream.

    This is joint work with my students Sepehr Assadi and Yang Li.
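    The streaming constraint can be illustrated with a simple single-pass heuristic (not the tight algorithm from the talk): keep a streamed set only if it covers enough still-uncovered elements. This trades approximation quality for space; the function name and threshold are my own.

```python
# Minimal single-pass set-cover heuristic: each set is seen exactly
# once, and is stored only if it adds at least `threshold` new
# elements to the cover built so far.

def streaming_set_cover(universe, stream, threshold=1):
    covered, chosen = set(), []
    for s in stream:                      # each set arrives once
        new = s - covered
        if len(new) >= threshold:         # worth storing
            chosen.append(s)
            covered |= new
    return chosen, covered == set(universe)

universe = {1, 2, 3, 4, 5}
stream = [{1, 2}, {2, 3}, {3}, {4, 5}]
chosen, is_cover = streaming_set_cover(universe, stream)
# {3} is discarded because it adds nothing new; the other three sets
# are kept and together cover the universe.
```

    Space here is proportional to the sets kept rather than the whole stream; the talk's results characterize exactly how much approximation quality any such space-bounded single-pass algorithm must give up.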


  • Synthesizing Heap Manipulations via Integer Linear Programming


    Speaker:

    Prof. Subhajit Roy

    Date:2016-12-15
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Writing heap-manipulating programs is hard. Even though the high-level algorithms may be simple, it is often tedious to express them using low-level operations. We propose an algorithm that uses expressions of intent in the form of concrete examples, drawn using box-and-arrow diagrams, to synthesize heap manipulations automatically. Instead of modeling the concrete examples in a monolithic manner, we extract a set of patterns of manipulation that can be applied repeatedly to construct such programs. We then attempt to infer these patterns as linear transformations, leveraging the power of ILP solvers for program synthesis. We have successfully applied our algorithm to synthesize data-structure manipulations on singly and doubly linked lists, AVL trees and binary search trees.
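    The core inference step can be sketched in miniature: fit a manipulation pattern as an affine transformation over small integer coefficients from before/after example pairs. A real synthesizer would encode this as an ILP and hand it to a solver; here a brute-force search over a tiny coefficient range stands in for the solver. The function name and example pairs are my own.

```python
# Toy pattern inference: find integer (a, b) such that
# after = a * before + b holds for every observed example pair.

from itertools import product

# Before/after pairs, e.g. an index update observed across
# box-and-arrow examples.
examples = [(0, 1), (1, 2), (3, 4)]

def fit_affine(examples, coeff_range=range(-3, 4)):
    """Brute-force stand-in for an ILP solver over small coefficients."""
    for a, b in product(coeff_range, repeat=2):
        if all(a * x + b == y for x, y in examples):
            return a, b
    return None                 # no affine pattern fits the examples
```

    On these pairs `fit_affine` recovers (a=1, b=1), i.e. "advance by one", a pattern that can then be applied repeatedly when constructing the synthesized manipulation.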


  • Empathic Human-Computer Interaction


    Speaker:

    Arindam Dey, University of South Australia

    Date:2016-12-05
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Computer interfaces are currently capable of responding to instructions provided by the user. However, most of these interfaces do not receive or respond to the emotional needs of the user, which in many cases users themselves are also not fully aware of. Empathic human-computer interaction (HCI) is a field of HCI where human emotions are measured and used to change the state of the user interface in real time. In collaborative setups, the emotions can also be shared between the collaborators to make them aware of the emotional states of each other. Augmented and virtual realities are three-dimensional user interface technologies that can provide immersive and interactive experiences by either augmenting or completely replacing the real world with virtual information. These augmented and virtual reality interfaces, along with other two-dimensional interfaces, can be designed to provide empathic experiences. In this talk, I will present my past and present research using the above technologies along with potential future research directions.


  • The Approximations Vs. Abstractions Dilemma in Pointer Analysis


    Speaker:

    Prof. Uday Khedker, CSE IIT Bombay

    Date:2016-11-24
    Time:11:00:00 (IST)
    Venue:IIA-501 Bharti Building
    Abstract:

    The world of programming is divided into religious camps with opposing views on the use of pointers. The verification community as well as the HPC community abhors pointers, while the hackers adore them. Compiler writers love the convenience that pointers provide to build fancy data structures but hate the difficulties that arise in trying to optimize a program containing pointers.

    There has been a lot of work on analyzing programs containing pointers. The mathematics of pointer analysis has been studied in great detail, and the findings conclusively say that, like many other interesting problems, the analysis of pointers in its full glory is undecidable and remains intractable even if some imprecision were to be tolerated. Given the vital importance of pointer analysis and the inherent difficulty of performing precise pointer analysis for practical programs, a large fraction of the pointer analysis community has come to believe that compromising on precision is necessary for scalability and efficiency. This is evidenced by the fact that a large number of reported investigations in pointer analysis involve a significant amount of engineering approximations.

    We find it hard to accept this assumption as the final inevitability. We believe that a lot could be gained by exploring a science of pointer analysis that tries to build clean abstractions rather than hasty approximations. In our opinion, this is a road less travelled in pointer analysis. Without undermining the engineering efforts, we propose that a search for approximations should begin only after building clean abstractions and not before it. The talk describes our efforts in this direction.


  • Targeted Client Synthesis to Detect Concurrency Bugs


    Speaker:

    Dr. Murali Krishna Ramanathan

    Date:2016-11-22
    Time:11:00:00 (IST)
    Venue:SIT 001
    Abstract:

    Detecting concurrency bugs can be challenging due to the intricacies associated with their manifestation. These intricacies correspond to identifying the methods that need to be invoked concurrently, the inputs passed to these methods, and the interleaving of the threads that cause the erroneous behavior. Neither fuzzing-based testing techniques nor over-approximate static analyses are well positioned to detect subtle concurrency defects while retaining high accuracy alongside satisfactory coverage. While dynamic analysis techniques have been proposed to overcome some of the challenges in detecting concurrency bugs, we observe that their success is critically dependent on the availability of effective multithreaded clients. Without a priori knowledge of the defects, manually constructing defect-revealing multithreaded clients is non-trivial.

    In this talk, I will present our approach that addresses the problem of generating effective multithreaded clients. Our technique analyzes the execution traces obtained from executing random sequential clients and produces a concurrent client program that drives shared objects via library method calls to states conducive for triggering deadlocks, data races, atomicity violations or assertion violations. We have implemented prototypes that incorporate the aforementioned ideas and applied them on 29 well-tested concurrent classes from popular Java libraries, including the latest version of JDK. We are able to automatically synthesize clients that helped expose more than 300 concurrency bugs. We reported many previously unknown bugs to the developers of these libraries, resulting in either fixes to the code or changes to the documentation pertaining to the thread-safe behavior of the relevant classes.
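    Why an effective multithreaded client matters can be shown with a minimal example (in Python, not the Java setting of the talk): two threads driving a shared object through a non-atomic check-then-act sequence can expose an atomicity violation that no sequential client would ever hit. The class and method names are my own.

```python
# A deliberately racy shared object: the read and the write of
# unsafe_increment are separated, so concurrent callers can lose
# an update. A sequential client can never trigger this.

import threading

class Counter:
    def __init__(self):
        self.value = 0

    def unsafe_increment(self):
        v = self.value          # read
        for _ in range(1000):   # widen the race window
            pass
        self.value = v + 1      # write: lost update if interleaved

counter = Counter()
threads = [threading.Thread(target=counter.unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter.value may be 1 instead of 2 when the two updates interleave.
```

    A synthesized client, in the spirit of the talk, would steer the shared object into exactly such conducive states and schedule the method calls so that the violation manifests reliably rather than by luck.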

    Bio:

    Murali Krishna Ramanathan is an Assistant Professor in the Department of Computer Science and Automation, IISc, Bangalore. His research interests are in programming languages and software engineering. Previously, he was a member of the analysis team at Coverity and helped build bug-detection tools that are widely used in the software industry. He received his PhD in Computer Science from Purdue University, USA.



  • Brain-Computer Interfaces: Recent Advances and Future Applications


    Speaker:

    Rajesh P. N. Rao
    Director, Center for Sensorimotor Neural Engineering
    Department of Computer Science and Engineering
    University of Washington, Seattle, USA

    Date:2016-11-18
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Ideas such as telepathy and telekinesis have long been a staple of science fiction stories and movies. Recent advances in the field of brain-computer interfaces (BCIs) are bringing these ideas closer to reality. Recent demonstrations have ranged from direct brain control of cursors, robots, and prosthetic arms to direct brain-to-brain communication. BCIs are enabling communication in locked-in patients, rehabilitation and restoration of movement in paralyzed and disabled individuals, and are increasingly being used in novel applications such as security, lie detection, alertness monitoring, entertainment, gaming, education, and human augmentation. This talk will provide an overview of BCIs and recent developments in the field, with highlights from BCI research at the University of Washington and the Center for Sensorimotor Neural Engineering.


    Bio:

    Rajesh P. N. Rao is the Director of the NSF Center for Sensorimotor Neural Engineering and Professor of Computer Science and Engineering at the University of Washington. He is the recipient of a Guggenheim Fellowship, a Fulbright Scholar award, an NSF CAREER award, a Young Investigator Award from the Office of Naval Research, a Sloan Faculty Fellowship, and a Packard Fellowship for Science and Engineering. He is the author of the book Brain-Computer Interfacing (Cambridge University Press, 2013) and the co-editor of two volumes, Probabilistic Models of the Brain (MIT Press, 2002) and Bayesian Brain (MIT Press, 2007). His research spans the areas of computational neuroscience, AI, and brain-computer interfacing. Prof. Rao's group was the first to demonstrate direct brain control of a humanoid robot in 2007 and direct brain-to-brain communication in 2013. With Prof. Adrienne Fairhall, he offered the first MOOC (massive open online course) in computational neuroscience on Coursera. Prof. Rao's other interests include classical Indian paintings and the 4000-year-old undeciphered Indus script, a topic on which he has given a TED talk.



  • Incremental Program Obfuscation


    Speaker:

    Dr. Omkant Pandey (Stony Brook University)

    Date:2016-11-09
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Recent advances in program obfuscation suggest that it is possible to create software that can provably safeguard secret information. However, software systems usually contain large executable code that is updated multiple times and sometimes very frequently. Freshly obfuscating the program for every small update will lead to a considerable efficiency loss. Thus, an extremely desirable property for obfuscation algorithms is *incrementality*: small changes to the underlying program translate into small changes to the corresponding obfuscated program.

    In this talk, I will briefly discuss the notion of *incremental program obfuscation.* I will explain various notions of security for incremental obfuscation, its limits, and what is possible to achieve for the indistinguishability based notions.

    Towards the end, I will spend some time discussing the Ph.D. program at Stony Brook University.


    Bio:

    Omkant Pandey is an assistant professor at Stony Brook University. Previously, he was a senior researcher at University of California Berkeley. He is interested in the broad areas of secure computation and next-generation encryption systems. He is the winner of 2016 ACM CCS Test-of-Time Award for his work on Attribute Based Encryption. He received his Ph.D. from UCLA.



  • The Cognitive Era


    Speaker:

    Dr. Ravi Kothari (IBM-IRL)

    Date:2016-11-07
    Time:11:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The ability to capture, store and process increasingly large volumes of data makes it possible to gain deeper insights that can improve almost every aspect of our lives. In this talk we provide a broad overview of some of the exciting things that we are enabling for this new data-driven world.


    Bio:

    Ravi Kothari is an IBM Distinguished Engineer and the Chief Scientist of IBM Research - India.



  • India Stack


    Speaker:

    Saurabh Panjwani

    Date:2016-10-31
    Time:15:00:00 (IST)
    Venue:SIT Seminar Room #113
    Abstract:

    India Stack is a suite of open technology standards being built by industry experts in India to ease product development in essential consumer-service verticals like finance and healthcare for Indian consumers. It is designed around India's national identity system, Aadhaar, currently being used in the delivery of public utilities in the country. Based on Aadhaar, a series of technology standards for digital payments (the Unified Payments Interface (UPI)), data storage (the digital locker standard) and digital signatures are being developed as part of the India Stack. These standards will enable the development of applications which can deliver services efficiently to Indian citizens, minimizing the use of cash and paper, and enabling "presence-less" interaction with service providers. In this meeting, I will discuss research questions around India Stack and how academicians could get involved in the project.

     


  • QUADSEAL: A Hardware Countermeasure against Side channel Attacks on AES


    Speaker:

    Prof. Sri Parameswaran
    Embedded Systems Laboratory, School of Computer Science and Engineering
    University of New South Wales, Sydney

    Date:2016-10-24
    Time:11:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Deep devastation is felt when privacy is breached, personal information is lost, or property is stolen. Now imagine when all of this happens at once, and the victim is unaware of its occurrence until much later. This is the reality, as an increasing number of electronic devices are used as keys, wallets and files. Security attacks targeting embedded systems illegally gain access to information or destroy it. The Advanced Encryption Standard (AES) is used to protect many of these embedded systems. While mathematically shown to be quite secure, it is now well known that AES circuits and software implementations are vulnerable to side-channel attacks. Side-channel attacks are performed by observing properties of the system (such as power consumption, electromagnetic emission, etc.) while the system performs cryptographic operations. In this talk, different power-based attacks are described, and various countermeasures are explained. In particular, a countermeasure titled Algorithmic Balancing is described in detail. Implementation of this countermeasure in hardware and software is described. Since process variation impairs countermeasures, we show how this countermeasure can be made to overcome process variations.
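    Why power traces leak key material can be illustrated with a toy correlation attack (my own sketch, not QuadSeal or a real AES attack): simulated "power" is the Hamming weight of an S-box output, so correlating guessed intermediate values against the traces singles out the correct key byte. The SBOX here is a random stand-in permutation, not the AES S-box, and there is no measurement noise.

```python
# Toy correlation power analysis: with a Hamming-weight leakage model
# and noise-free traces, the correct key guess produces the strongest
# correlation between predicted and observed "power".

import random

random.seed(0)
SBOX = list(range(256))
random.shuffle(SBOX)                      # placeholder for the AES S-box

def hamming_weight(x):
    return bin(x).count("1")

secret_key = 0x3A
plaintexts = [random.randrange(256) for _ in range(200)]
# Leakage model: power draw tracks HW of the S-box output.
traces = [hamming_weight(SBOX[p ^ secret_key]) for p in plaintexts]

def score(key_guess):
    """Covariance of predicted leakage with observed traces."""
    guesses = [hamming_weight(SBOX[p ^ key_guess]) for p in plaintexts]
    mg = sum(guesses) / len(guesses)
    mt = sum(traces) / len(traces)
    return sum((g - mg) * (t - mt) for g, t in zip(guesses, traces))

recovered = max(range(256), key=score)    # best-correlated guess wins
```

    Countermeasures like Algorithmic Balancing aim to break exactly this link: if the device's power draw no longer depends on the data being processed, the correlation above carries no key information.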


    Bio:

    Sri Parameswaran is a Professor in the School of Computer Science and Engineering at the University of New South Wales. He also serves as the Postgraduate Research and Scholarships coordinator at the same school. Prof. Parameswaran received his B. Eng. Degree from Monash University and his Ph.D. from the University of Queensland in Australia. He has held visiting appointments at University of California, Kyushu University and Australian National University. He has also worked as a consultant to the NEC Research laboratories at Princeton, USA and to the Asian Development Bank in Philippines. His research interests are in System Level Synthesis, Low power systems, High Level Systems, Network on Chips and Secure and Reliable Processor Architectures. He is the Editor-in-Chief of IEEE Embedded systems Letters. He serves or has served on the editorial boards of IEEE Transactions on Computer Aided Design, ACM Transactions on Embedded Computing Systems, the EURASIP Journal on Embedded Systems and the Design Automation of Embedded Systems. He has served on the Program Committees of Design Automation Conference (DAC), Design and Test in Europe (DATE), the International Conference on Computer Aided Design (ICCAD), the International Conference on Hardware/Software Codesign and System Synthesis (CODES-ISSS), and the International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES).



  • Legal, Non-legal...and beyond


    Speaker:

    Kripabandhu Ghosh

    Date:2016-09-30
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Information Retrieval from legal documents is an emerging research area, and legal IR has posed unprecedented challenges to researchers. In this talk, the speaker will address some of the prominent problems in the legal domain (which he explored during his PhD) along with their proposed solutions. In addition, the speaker will also discuss some important problems in information extraction from online social media texts (e.g., tweets) during disaster situations.


    Bio:

    Kripabandhu Ghosh has recently defended his Ph.D. in Computer Science at the Indian Statistical Institute, Kolkata. He was also an International Scholar from September 2015 to January 2016 at KU Leuven, Belgium. His broad area of interest is Information Retrieval. More specifically, he is interested in information retrieval in the legal domain, improving retrieval performance on noisy text in the absence of training data, optimization techniques for data fusion, query reweighting and expansion, personalized advertisements built from web sources, and online social media text retrieval and analysis. He has been on the Organizing Committee / Program Committee of the Forum for Information Retrieval Evaluation (FIRE: http://fire.irsi.res.in/fire/2016/home) since 2011. He has co-organized the FIRE 2013 “Information access in legal domain” track (http://www.isical.ac.in/~fire/2013/legal.html) and the Personalised AdveRtisements buIlt from web Sources (PARIS) workshop (http://www.parisproject.be/workshop_pic/program_schedule.pdf). He is currently co-organizing the FIRE “Information Extraction from Microblogs Posted during Disasters” track (https://sites.google.com/site/fire2016microblogtrack/information-extraction-from-microblogs-posted-during-disasters).



  • Learning, Games and Sample Complexity


    Speaker:

    Arnab Basu, IIM Bangalore

    Date:2016-09-29
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    We talk about some of the open problems investigated by this author spanning across learning, games and sample complexity. The machinery needed includes probability, geometry and topology, and dynamical systems. Though several applications to modern day data analysis can be naturally derived from the techniques discussed herein, the talk is mostly theoretical.


    Bio:

    Arnab Basu is a computer scientist and applied mathematician, working since 2010 as an Associate Professor of Decision Sciences and Information Systems at the Indian Institute of Management (IIM) Bangalore. He was also an external expert consultant on data sciences research for InMobi Technologies from April 2015 to March 2016. He holds a B.Tech(H) degree in CSE from IIT Kharagpur and a Ph.D. in Computer and System Sciences from TIFR. His research interests are interdisciplinary, with deep applications to large-scale data analysis, algorithms, machine learning, optimization and algorithmic economics.



  • Breaking the Memory Wall in Multi-core Systems through Prefetch and Compression Engines


    Speaker:

    Biswabandan Panda, INRIA Rennes

    Date:2016-09-22
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Historically, memory speed has improved more slowly than processor speed, creating a growing gap between the two. This inevitably leads to the "memory wall", where processor pipelines stall waiting for memory accesses. Moreover, the advent of multi-core systems in recent years exacerbates the memory wall, as multiple applications contend with each other for shared resources such as off-chip bandwidth.

    This talk will cover two hardware techniques, prefetch engines and cache compression engines, that try to break the memory wall. Prefetch engines speculatively bring data into the cache, converting cache misses into cache hits. Cache compression engines also increase the cache hit ratio, by storing more data in fewer bytes and thereby increasing the effective cache capacity. The talk will provide a brief introduction to prefetch engines (especially aggressiveness engines) and cache compressors, followed by a detailed description of my recent research work, such as a synergistic prefetcher aggressiveness controller and dictionary-sharing-based cache compression.
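    As a concrete illustration of the prefetching idea, here is a minimal sketch of a classic per-PC stride prefetcher (not the talk's aggressiveness controller): once a load instruction has exhibited the same address stride twice, the next address in the pattern is prefetched.

```python
class StridePrefetcher:
    """Minimal stride prefetcher: a table maps each load PC to its last
    address and last observed stride; two matching strides trigger a
    prefetch of the next address in the pattern."""
    def __init__(self):
        self.table = {}  # pc -> (last_addr, last_stride)

    def access(self, pc, addr):
        """Record a demand access; return an address to prefetch, or None."""
        entry = self.table.get(pc)
        if entry is None:
            self.table[pc] = (addr, 0)
            return None
        last_addr, last_stride = entry
        stride = addr - last_addr
        self.table[pc] = (addr, stride)
        if stride == last_stride and stride != 0:
            return addr + stride   # confident: prefetch the predicted line
        return None

pf = StridePrefetcher()
for addr in [100, 164, 228, 292]:     # one load streaming with stride 64
    hint = pf.access(pc=0x400A, addr=addr)
print(hint)  # 356 -- the predicted next address
```

    An aggressiveness controller, as discussed in the talk, would additionally decide how many such predicted addresses to prefetch ahead.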


    Bio:

    Biswabandan Panda is a post-doctoral researcher at the PACAP team in INRIA, Rennes. He received his Ph.D. from Indian Institute of Technology Madras, where he was a recipient of TCS Ph.D. fellowship. His area of research includes hardware prefetching, cache/DRAM Compression, and memory sub-system management for multi-core systems.



  • Efficient Similarity Search across Top-k Lists under the Kendall's Tau Distance


    Speaker:

    Koninika Pal, TU Kaiserslautern

    Date:2016-09-16
    Time:12:00:00 (IST)
    Venue:SIT Building Room #113
    Abstract:

    We consider the problem of similarity search in a set of top-k lists under the generalized Kendall's Tau distance function. This distance describes how related two rankings are in terms of discordantly ordered items. We consider pair- and triplet-based indices to counter the shortcomings of naive inverted indices and derive efficient query schemes by relating the proposed index structures to the concept of locality-sensitive hashing (LSH). Specifically, we devise four different LSH schemes for Kendall's Tau using two generic hash families over individual elements or pairs of them. We show that each of these functions has the desired property of being locality sensitive. Further, we discuss the selection of hash functions for the proposed LSH schemes for a given query ranking, called query-driven LSH, and derive bounds on the number of hash functions required to achieve a predefined recall goal. Experimental results, using two real-world datasets, show that the devised methods outperform the prefix-filtering SimJoin method, the state-of-the-art method for querying similar sets, and are far superior to a plain inverted-index-based approach.
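    The two core ingredients can be sketched briefly. Below is plain Kendall's Tau distance for rankings over the same item set (the talk's generalized variant also penalizes items appearing in only one top-k list), together with one pair-based hash of the kind the LSH schemes build on: rankings that are close in Kendall's Tau agree on the order of most pairs and therefore collide on most such hashes.

```python
from itertools import combinations

def kendall_tau_distance(r1, r2):
    """Number of item pairs ordered discordantly by two rankings of the
    same item set."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def pair_hash(ranking, pair):
    """Pair-based hash: 1 if the pair appears in this relative order.
    Similar rankings collide on most pairs -- the locality-sensitivity
    property."""
    a, b = pair
    pos = {item: i for i, item in enumerate(ranking)}
    return int(pos[a] < pos[b])

print(kendall_tau_distance(list("abcd"), list("abdc")))  # one swap -> 1
```

    The actual schemes in the talk choose which pairs (or elements) to hash on in a query-driven fashion.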


    Bio:

    Koninika Pal is a fourth year PhD student at TU Kaiserslautern under the supervision of Sebastian Michel. Her research interests are in ranking, querying and performance study for similarity search in big data.



  • Interactive Data Science


    Speaker:

    Tim Kraska, Brown University, USA

    Date:2016-09-09
    Time:15:30:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Unleashing the full potential of Big Data requires a paradigm shift in the algorithms and tools used to analyze data, towards more interactive systems with highly collaborative and visual interfaces. Ideally, a data scientist and a domain expert should be able to make discoveries together by directly manipulating, analyzing and visualizing data on the spot, instead of having week-long back-and-forth interactions between them. Current systems, such as traditional databases or more recent analytical frameworks like Hadoop or Spark, are ill-suited for this purpose. They were not designed to be interactive nor to support the special requirements of visual data exploration. Similarly, most machine learning algorithms are not able to provide initial answers at "human speed" (i.e., sub-seconds), nor are existing methods sufficient to convey the impact of various risk factors, such as approximation errors or incompleteness within the data.

    In this talk, I will present our vision of a new approach for conducting interactive exploratory analytics and explain why integrating the aforementioned features requires a complete rethinking of the full analytics stack, from the interface to the "guts". I will present recent results towards this vision including our novel interface, analytical engine and index structure, and outline what challenges are still ahead of us.


    Bio:

    Tim Kraska is an Assistant Professor in the Computer Science department at Brown University. Currently, his research focuses on Big Data management systems for modern hardware and new types of workloads, especially interactive analytics. Before joining Brown, Tim spent 3 years as a PostDoc in the AMPLab at UC Berkeley, where he worked on hybrid human-machine database systems and cloud-scale data management systems. Tim received his PhD from ETH Zurich under the supervision of Donald Kossmann. He was awarded an NSF CAREER Award (2015), an Air Force Young Investigator Award (2015), a Swiss National Science Foundation Prospective Researcher Fellowship (2010), a DAAD Scholarship (2006), a University of Sydney Master of Information Technology Scholarship for outstanding achievement (2005), the University of Sydney Siemens Prize (2005), two VLDB best demo awards (2015 and 2011) and an ICDE best paper award (2013).



  • Some Perspectives in (Big) Social Data Exploration


    Speaker:

    Sihem Amer-Yahia, CNRS, France

    Date:2016-09-09
    Time:16:30:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    The web has evolved from a technology platform to a social milieu where factual, opinion and behavior data interleave. A number of social applications are being built to analyze and extract value from this data, encouraging us to adopt a data-driven approach to research. I will provide different formulations of social data exploration and describe recent work on extracting user segments on the social Web. More specifically, I will talk about extracting and exploring user segments on collaborative rating sites. No prior knowledge of this area is required.

    This talk is based on collaborations with colleagues at UT Austin, NJIT, University of British Columbia, and Google Research.


    Bio:

    Sihem Amer-Yahia is DR1 CNRS at LIG in Grenoble where she leads the SLIDE team. Her research interests are in social data management and crowdsourcing. She previously held research positions at QCRI, Yahoo! Research and AT&T Labs. Sihem served on the ACM SIGMOD Executive Board and the VLDB Endowment. She is Editor-in-Chief of the VLDB Journal and is on the editorial boards of TODS and Information Systems. She is currently chairing the VLDB Workshops 2016 and will chair VLDB 2018. Sihem completed her Ph.D. in Computer Science at INRIA in 1999, and her Diplôme d'Ingénieur at INI, Algeria.



  • Tracking the Conductance of Rapidly Evolving Topic-Subgraphs


    Speaker:

    Sainyam Galhotra, UMass, Amherst

    Date:2016-09-02
    Time:15:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    Monitoring the formation and evolution of communities in large online social networks such as Twitter is an important problem that has generated considerable interest in both industry and academia. Fundamentally, the problem can be cast as studying evolving subgraphs (each subgraph corresponding to a topical community) on an underlying social graph, with users as nodes and the connections between them as edges. A key metric of interest in this setting is tracking the changes to the conductance of subgraphs induced by edge activations. This metric quantifies how well or poorly connected a subgraph is to the rest of the graph relative to its internal connections. Conductance has been demonstrated to be of great use in many applications, such as identifying bursty topics, tracking the spread of rumors, and so on. However, tracking this simple metric presents a considerable scalability challenge: the underlying social network is large, the number of communities that are active at any moment is large, the rate at which these communities evolve is high, and moreover, we need to track conductance in real-time. We address these challenges in this paper. We propose an in-memory approximation called BloomGraphs to store and update these (possibly overlapping) evolving subgraphs. As the name suggests, we use Bloom filters to represent an approximation of the underlying graph. This representation is compact and computationally efficient to maintain in the presence of updates. This is especially important when we need to simultaneously maintain thousands of evolving subgraphs. BloomGraphs are used in computing and tracking conductance of these subgraphs as edge activations arrive. BloomGraphs have several desirable properties in the context of this application, including a small memory footprint and efficient updateability. We also demonstrate mathematically that the error incurred in computing conductance is one-sided and that in the case of evolving subgraphs the change in approximate conductance has the same sign as the change in exact conductance in most cases.
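    For reference, the exact quantity being tracked can be computed as follows. This is a minimal sketch over a plain Python adjacency dict; BloomGraphs replace the exact edge set with Bloom filters to trade a one-sided approximation error for space.

```python
def conductance(adj, subset):
    """Conductance of vertex set S: (# edges leaving S) divided by
    min(vol(S), vol(rest)), where vol is the sum of degrees.
    `adj` is a symmetric adjacency dict for an undirected graph."""
    S = set(subset)
    cut = vol_S = vol_rest = 0
    for u, nbrs in adj.items():
        for v in nbrs:
            if u in S:
                vol_S += 1
                if v not in S:
                    cut += 1
            else:
                vol_rest += 1
    return cut / min(vol_S, vol_rest)

# Path graph 1-2-3-4; the community {1, 2} has a single outgoing edge.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(conductance(path, {1, 2}))  # 1 cut edge / min(3, 3) -> 0.333...
```

    A well-separated community has low conductance; a bursty, outward-connected one has high conductance, which is what makes the metric useful for the applications above.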


    Bio:

    Sainyam Galhotra is a graduate student at the University of Massachusetts (UMass), Amherst, USA. He spent his summer as a research intern at Google Research, Mountain View. His research interests include graph analysis, data mining and causality analytics. Prior to pursuing graduate studies, he worked as a budding scientist at the Xerox Research Centre India (XRCI), where he contributed towards designing algorithms for real-time analysis of problems pertaining to social media. He obtained his undergraduate degree in computer science from the Indian Institute of Technology (IIT), Delhi, where he worked with Prof. Amitabha Bagchi, Prof. Maya Ramanath and Dr. Srikanta Bedathur (IBM). His work has been published in top data mining and machine learning conferences. Sainyam can be contacted at sainyam@cs.umass.edu



  • An Equivalence-Checker for Assembly Programs with Loops


    Speaker:

    Dr. Saurav Bansal, CSE Department

    Date:2016-08-31
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Could there be a compiler that outputs the "optimal" code for a program specification? The problem is undecidable in general, and efforts in this direction are often called 'superoptimizers' (like the superman). I will briefly discuss how superoptimizers work, and then discuss our efforts on integrating support for loop-based optimizations inside a superoptimizer.

     Supporting loops in a superoptimizer can yield up to 12x performance improvements over state-of-the-art compilers. A fundamental requirement for realizing this is an assembly program equivalence checker that can support loops. We report the design and implementation of an equivalence checker, Q, that supports programs with loops, and discuss our experiences with testing it both within a superoptimizer and on end-to-end compiler-based transformations on programs much larger than the current capabilities of any superoptimizer.

     


  • Verifying Security Properties in Modern SoCs Using Instruction-Level Abstractions


    Speaker:

    Pramod Subramanyan, Princeton

    Date:2016-08-29
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM #001
    Abstract:

    Modern Systems-on-Chip (SoC) designs comprise interacting intellectual property (IP) blocks which are often sourced from third-party vendors and contain both hardware accelerators and programmable cores executing firmware. Verifying that SoCs meet their security requirements in this context is especially challenging. These challenges relate to (i) specifying security properties for verification, and (ii) co-verification of these properties across firmware (FW) and hardware (HW).

    This talk will describe a methodology for system-level security verification of SoCs. We address the FW/HW co-verification challenge by raising the level of abstraction of the hardware modules to be similar to that of instructions in software/firmware. This abstraction, referred to as an instruction-level abstraction (ILA), plays a role similar to the instruction set architecture (ISA) for general purpose processors and enables high-level analysis of SoC firmware. In particular, the ILA can be used instead of the cycle-accurate and bit-precise register transfer level (RTL) implementation for scalable verification of system-level security properties in SoCs.

    Manual construction of the ILA in the context of third-party IPs can be challenging. Hence, we introduce techniques to semi-automatically synthesize the ILA using a template abstraction and directed simulations of the SoC hardware. We describe techniques to ensure that the ILA is a correct abstraction of the underlying hardware implementation. We then show how the ILA can be used for SoC security verification by designing a specification language for security properties and an algorithm based on symbolic execution to verify these properties. We discuss two case studies of applying ILA-based verification to (i) an SoC built out of open source components and (ii) part of a commercial SoC. The methodology discovered several bugs in the RTL implementations, simulators and firmware.


  • Testing Quantum Devices and Quantum Mechanics


    Speaker:

    Prof Umesh Vazirani, EECS, UC Berkeley

    Date:2016-08-26
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The tremendous recent progress in the physical realization of devices based on the principles of quantum mechanics also throws up a fundamental challenge: how to test quantum devices, which are by nature imperfect and susceptible to uncontrollable faults. The classical verifier of such a device is necessarily at a disadvantage due to the exponential power of quantum systems; nevertheless, an exciting sequence of results shows that uniquely quantum features such as entanglement (Einstein's "spooky action at a distance") can be leveraged to make such testing possible. I will describe how such testing can thwart a malicious adversary in quantum cryptographic settings such as quantum key distribution and certifying quantum random numbers. More sophisticated schemes of this kind can be used to verify that a quantum computer is truly quantum. At a conceptual level, such tests of quantum devices are really tests of quantum mechanics that go well beyond the famous Bell tests that disproved Einstein's objections to quantum mechanics.

    I will also describe a more pragmatic but principled approach to the testing of large scale quantum annealers - by performing a quantum Turing test comparing the quantum annealer to a suitable classical benchmark. I will discuss the results of applying such a test to the D-Wave 108 qubit quantum annealer, as well as the ~1000 qubit D-Wave 2X quantum annealer.

    The talk will be accessible to broad audience of computer scientists, physicists and mathematicians.
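    The Bell/CHSH test mentioned above has a compact numerical illustration. Assuming the standard quantum correlation E(a, b) = cos(a - b) for suitably correlated qubit pairs measured at angles a and b, the optimal quantum strategy exceeds the classical CHSH bound of 2, reaching 2√2:

```python
import math

def E(a, b):
    """Quantum correlation between outcomes at measurement angles a and b."""
    return math.cos(a - b)

# Optimal measurement angles for Alice (a, a2) and Bob (b, b2).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4

# CHSH combination: any classical (local hidden variable) strategy
# satisfies |S| <= 2, while quantum mechanics reaches 2*sqrt(2).
S = E(a, b) + E(a2, b) + E(a, b2) - E(a2, b2)
print(S, 2 * math.sqrt(2))  # both ~2.828
```

    Observing S > 2 certifies genuinely quantum behavior, which is the seed of the device-testing and randomness-certification results in the talk.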


    Bio:

    Umesh Vazirani is the Roger A. Strauch Professor of Electrical Engineering and Computer Science at the University of California, Berkeley, and director of the Berkeley Quantum Computation Center. He is one of the founders of the field of quantum computation, a recipient of the Fulkerson Prize in discrete mathematics awarded by the American Mathematical Society, and author of the books "An Introduction to Computational Learning Theory" with Michael Kearns, and "Algorithms" with Sanjoy Dasgupta and Christos Papadimitriou.

     



  • GPS: An Indispensable Technology


    Speaker:

    Prof. Pratap Misra, Tufts University

    Date:2016-08-24
    Time:16:00:00 (IST)
    Venue:SIT SEMINAR ROOM
    Abstract:

    GPS in many ways is like the Internet. Both are gifts of the U.S. Department of Defense (DoD) to the civil community. Both continue to transform the way we do ordinary, everyday things as individuals and society, delivering wide-ranging economic and social benefits far beyond anything their designers could have dreamt of. And, of course, the Internet now plays an important role in newer GPS applications.

    Operational for about 20 years, GPS is now used by over 1 billion people, mostly as a receiver-on-a-chip built into their smart phones. GPS has also raised our expectations: we now demand meter-level positioning capability anytime and anywhere, including indoors. And, inevitably, success brings imitators: GPS-like systems are under development in Russia, Europe, China, Japan, and India.

    This talk will focus mostly on the principles behind the operation of GPS and the technologies required to realize a global radio-navigation system based on them.
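    The core positioning principle can be sketched numerically. The following is an illustrative Gauss-Newton solve of the range equations with made-up satellite coordinates; a real GPS receiver additionally solves for its clock bias as a fourth unknown, and works from noisy pseudoranges rather than exact ones.

```python
import numpy as np

# Illustrative satellite positions (km) and a true receiver position used
# only to synthesize the range measurements; all numbers are made up.
sats = np.array([[15600.0,  7540.0, 20140.0],
                 [18760.0,  2750.0, 18610.0],
                 [17610.0, 14630.0, 13480.0],
                 [19170.0,   610.0, 18390.0]])
truth = np.array([2000.0, 3000.0, 4000.0])
ranges = np.linalg.norm(sats - truth, axis=1)   # noiseless ranges

def solve_position(sats, ranges, iters=20):
    """Gauss-Newton on the range equations ||sat_i - x|| = r_i."""
    x = np.zeros(3)                              # initial guess: the origin
    for _ in range(iters):
        diffs = x - sats
        dists = np.linalg.norm(diffs, axis=1)
        J = diffs / dists[:, None]               # Jacobian of the residuals
        x = x - np.linalg.lstsq(J, dists - ranges, rcond=None)[0]
    return x

est = solve_position(sats, ranges)
print(est)  # converges to the true position
```

    With four or more satellites the same least-squares step absorbs the clock-bias unknown, which is why four satellites are the minimum for a GPS fix.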

     


    Bio:

    Pratap Misra, a member of the first batch of students to graduate from IITK, has worked in the field of navigation satellites for 25 years starting with a project at MIT Lincoln Laboratory to combine measurements from GPS and GLONASS, the Soviet answer to GPS, to improve navigation for civil aviation. He is a coauthor with Professor Per Enge of Stanford University of a widely used graduate-level engineering textbook on GPS.

    Misra is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He is also a Fellow of the Institute of Navigation (ION), which honored him in 2014 with the Kepler Award "for sustained and significant contributions to the development of satellite navigation."

     



  • Timed Network Games


    Speaker:

    Shibashis Guha, Hebrew University

    Date:2016-08-23
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Network games are widely used as a model for selfish resource-allocation problems. In the classical model, each player selects a simple path connecting her source and target vertex. The cost of using an edge depends on the number of players that use it, and abstracts the fact that different users may use a resource at different times and for different durations. In reality, however, time plays an important role. Sharing the cost of a taxi, for example, makes sense only when traveling together.

    We study timed network games, which add a time component to network games. Each vertex v in the network is associated with a cost function, mapping the load on v to the price that a player pays for staying in v for one time unit with this load. In addition, each edge has a guard, describing time intervals in which the edge can be traversed, forcing the players to spend time on vertices. Unlike earlier work that adds a time component to network games, the time in our model is continuous and cannot be discretized. In particular, players have uncountably many strategies, and a game may have uncountably many pure Nash equilibria. We study properties of timed network games with cost-sharing or congestion cost functions: their stability, equilibrium inefficiency, and complexity. In particular, we show that the answer to the question of whether we can restrict attention to boundary strategies, namely ones in which edges are traversed only at the boundaries of guards, is mixed.

    (Joint work with Guy Avni and Orna Kupferman)


  • Separations and Lower Bounds in Decision Tree Complexity


    Speaker:

    Swagato Sanyal(TIFR)

    Date:2016-08-22
    Time:15:00:00 (IST)
    Venue:CSE Discussion Room (IIA 425)
    Abstract:

    The decision tree model is one of the simplest models of computation, where it is often possible to prove unconditional lower bounds. There are different kinds of decision trees: deterministic, randomized with zero error or bounded error, quantum, etc. A central question in decision tree complexity is whether randomness really helps in this model, which has long been answered in the affirmative. Interestingly, however, the exact relationship between different decision tree complexity measures, and the widest separation possible amongst them, was not well understood for a long time; only recently has a series of works answered many of these fundamental questions. In this talk we will review these recent developments, restricting ourselves mostly to deterministic and randomized decision tree complexity measures.


  • Engineering Software Defined Network (SDN) for Scale


    Speaker:

    Deepak Bansal, Microsoft USA

    Date:2016-08-09
    Time:15:00:00 (IST)
    Venue:SIT Seminar Room
    Abstract:

    Microsoft Azure is one of the largest public cloud infrastructures in the world. The network infrastructure for Microsoft Azure is built on a highly scalable and reliable Software Defined Network (SDN). In this talk, Deepak will talk about the SDN implementation in Microsoft Azure: its motivation, capabilities, architecture, components, etc. He will specifically focus on the challenges of operating at scale and delivering high reliability, what has been done to meet those challenges, and the lessons learned.

     


    Bio:

    Deepak Bansal runs the Software Defined Network (SDN) team for Microsoft. He has been leading networking for Azure since 2008, as a member of the founding team for Microsoft Azure. He pioneered the implementation and deployment of SDN in Microsoft Azure, scaling it to millions of customer networks and delivering millions of automated, customer-initiated network changes per day. He is an active member of the SDN community, chairing the configuration management working group in ONF and serving on the ONF board. Prior to Azure/SDN, he led the development of the NetIO TCP/IP stack and the IPv6 implementation for Windows. He has master's and bachelor's degrees in computer science from MIT and IIT Delhi, respectively. He holds over 20 issued patents. His areas of interest include SDN, NFV, congestion control, wireless networks, network protocols, cloud, distributed systems, reliability and scale.

     

     



  • Lightweight Formal Methods for LLVM Verification


    Speaker:

    Prof. Santosh Nagarakatte from CS Department, Rutgers University

    Date:2016-08-08
    Time:03:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Peephole optimizations perform local rewriting to improve the efficiency of the code input to the compiler. These optimizations are a persistent source of bugs. The talk will present Alive and Alive-FP, domain-specific languages for writing integer and floating-point peephole optimizations in LLVM, respectively. A transformation expressed in these DSLs is automatically shown to be correct by encoding it into constraints whose validity is checked using a Satisfiability Modulo Theories (SMT) solver. Furthermore, an optimization expressed in the DSL can be automatically translated into C++ code that is suitable for inclusion in an LLVM optimization pass.

    These DSLs are based on an attempt to balance usability and formal methods. We have discovered numerous bugs in the LLVM compiler, and the Alive tool is actively used by LLVM developers to check new optimizations. I will also highlight the challenges in incorporating lightweight formal methods into the tool-chain of the compiler developer, and the lessons learned in this project. I will conclude by briefly describing other active projects in my group on approximation, parallel program testing, and verification.


    Bio:

    Santosh Nagarakatte is an Assistant Professor of Computer Science at Rutgers University. He obtained his PhD from the University of Pennsylvania in 2012. His research interests are in Hardware-Software Interfaces spanning Programming Languages, Compilers, Software Engineering, and Computer Architecture. His papers have been selected as IEEE MICRO TOP Picks papers of computer architecture conferences in 2010 and 2013. He has received the NSF CAREER Award in 2015, ACM SIGPLAN PLDI 2015 Distinguished Paper Award, ACM SIGSOFT ICSE 2016 Distinguished Paper Award, and the Google Faculty Research Award in 2014 for his research on LLVM compiler verification.



  • Cheap approximate localization using FM Radio


    Speaker:

    Piyush Kumar 

    Date:2016-05-12
    Time:14:30:00 (IST)
    Venue:IIA, 101 , seminar room Bharti building
    Abstract:

    It is hard to imagine a world without GPS. Unfortunately, GPS might be jammed, spoofed or become unavailable. In this talk, I will present a coarse and passive localization system for GPS-denied environments. Our system is based on a cheap software-defined radio (SDR), which is used to listen to broadcast signals from local FM radio stations. We show that the hardware and associated algorithms are capable of localization with average errors of less than 5 miles, without requiring a fingerprinting or crowdsourcing approach.


    Bio:

    Piyush Kumar is currently visiting IIT Delhi on a Fulbright fellowship. He received his B.Sc degree in Mathematics and Computing from IIT Kharagpur in 1999 and a Ph.D. in Computer Science from Stony Brook University in 2004. Prior to joining FSU as an Assistant Professor in 2004, he was a visiting scientist at MPI-Saarbruecken, Germany. He currently is an Associate Professor of Computer Science at Florida State University.



  • Exploiting Maximal Overlap for Non-Contiguous Data Movement On Modern GPU Enabled Clusters


    Speaker:

    Dipsankar Banerjee, Ohio State University

    Date:2016-05-09
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In modern high-end computing platforms, the Message Passing Interface (MPI) continues to enjoy widespread popularity due to its portability and features that allow applications to tap the power of today's high-performance computers. In recent years, high-performance clusters have become increasingly heterogeneous, with compute nodes equipped with high-throughput accelerators such as Graphics Processing Units (GPUs). With such heterogeneity in mind, MPI has evolved to let application developers utilize the compute power of such accelerated nodes through new communication protocols that perform communication over GPU memories in a transparent manner. This is now well established as "CUDA-aware MPI".

    In this talk, I will present the current state-of-the-art designs for CUDA-aware MPI implementations and then provide an overview of how CUDA-aware MPI has recently been enhanced with a mechanism for higher overlap between CPU compute and GPU-based communication. I will elaborate on how the massive parallelism offered by GPUs can be combined with modern features of the GPU runtime to design a high-throughput mechanism for non-contiguous data movement that provides higher system efficiency and overlap.


  • Interactive Machine Learning for Information Extraction


    Speaker:

    Sameer Singh

    Date:2016-05-02
    Time:15:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Most of the world's knowledge, be it factual news, scholarly research, social communication, subjective opinions, or even fictional content, is now easily accessible as digitized text. Unfortunately, due to the unstructured nature of text, much of the useful content in these documents is hidden. The goal of information extraction is to address this problem: extracting meaningful, structured knowledge (such as graphs and databases) from text collections. The biggest challenges when using machine learning for information extraction include the high cost of obtaining annotated data and lack of guidance on how to understand and fix mistakes.

    In this talk, I propose interpretable representations that allow users and machine learning models to interact with each other: enabling users to inject domain knowledge into machine learning and for machine learning models to provide explanations as to why a specific prediction was made. I use these techniques to extract relations between entities that are expressed in text. I first describe how symbolic domain knowledge, if provided by the user as first-order logic statements, can be injected into embedding-based machine learning models to improve the predictions. In the second part of the talk, I present an approach to “explain” machine learning predictions using symbolic representations, which the user may annotate for more effective supervision. I present experiments to demonstrate that an interactive interface between a user and machine learning is effective in reducing annotation effort and in quickly training accurate extraction systems.


    Bio:

    Sameer Singh is a Postdoctoral Research Associate at the University of Washington, working on large-scale and interactive machine learning applied to information extraction and natural language processing. He received his PhD from the University of Massachusetts, Amherst, during which he also interned at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was recently selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, has been awarded the Yahoo! Key Scientific Challenges and the UMass Graduate School fellowships, and was a finalist for the Facebook PhD fellowship. (http://sameersingh.org)



  • Analytics - An insightful enabler across the hydrocarbon value chain


    Speaker:

    Steve Cherek and Vinit Verma

    Date:2016-04-29
    Time:10:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The techno-economic dynamic is shifting the balance from an industrial paradigm to a digital paradigm. Cheap information processing, storage, and transmission is allowing more data to be processed, and more people and devices to be connected, than ever before. This is resulting in datafication, a process by which phenomena are transformed into quantifiable formats. We now have a quantitative basis for decision making like never before: we can represent reality to a much greater degree, tightening the feedback loop. Ubiquitous data makes possible new connections and value opportunities, both within and across industry value chains.

    Data-mediated connections open new opportunities to leverage network effects. In an industrial economy, scale determined winners and losers; in the information economy, taking advantage of network effects determines success. We now compete on insights, not on the paradigm of data ownership. The challenge is to get to the right data in Big Data. The talk will discuss how ExxonMobil is responding to this techno-economic dynamic using data science and analytics. We will cover the scale and diversity of ExxonMobil's operations, and the breadth and depth of IT capability needed to run our enterprise. Finally, we will show how analytics is a key enabler for transformational outcomes across the hydrocarbon value chain.


    Bio:

    Steve Cherek ExxonMobil Applications Manager - Information Technology
    Steve received his undergraduate degree in Management Information Systems from the University of Oklahoma in 1989. That same year he began his career with Exxon Company as an application support analyst in Houston. Throughout his career, Steve has held a variety of business and IT roles. In 1999, he became the IT manager of Esso Thailand, based in Bangkok. In 2001, he served as a Senior Business Analyst for the U.S. Fuels Marketing, then as a Regional Sales Manager for Lubricants and Petroleum Specialties. He re-joined IT in various Business IT Manager positions. Steve assumed his current role as Applications Manager in 2011. Steve currently resides in Houston with his wife and two children. He is active with United Way, and is a member of the Board of Directors of the DePelchin Children's Center, a United Way Agency.

    Vinit Verma ExxonMobil Analytics Manager - Information Technology
    During his 25 years at ExxonMobil, Vinit has held a variety of global computing positions supporting the Exploration Company, R&D organizations, Chemical manufacturing organization, Fuels Retail Operations and Upstream Financials. As founder of techSpring innovation, he has a passion for the process of innovation. Vinit has Silicon Valley consulting experience with HP, Sun Microsystems and networking & insurance companies. His interests include research work with MIT on collaborative innovation networks. Vinit received a Master of Science degree in Computer Science from Oklahoma State University, and a Bachelor of Sciences degree (Physics, Chemistry, Math) from St. Stephen's College, Delhi.



  • Security Games: Key Algorithmic Principles, Deployed Applications and Research Challenges


    Speaker:

    Prof. Milind Tambe from University of Southern California, Los Angeles,

    Date:2016-04-22
    Time:17:00:00 (IST)
    Venue:LHC
    Abstract:

    Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical infrastructure, protecting endangered wildlife, forests and fisheries, suppressing urban crime or security in cyberspace. Unfortunately, limited security resources prevent full security coverage at all times; instead, we must optimize the use of limited security resources. To that end, our "security games" framework -- based on computational game theory, while also incorporating elements of human behavior modeling, AI planning under uncertainty and machine learning -- has led to building and deployment of decision aids for security agencies in the US and around the world. These decision aids are in use by agencies such as the US Coast Guard, the Federal Air Marshals Service and by various police agencies at university campuses, airports and metro trains.
    Moreover, recent work on "green security games" has led our decision aids to begin assisting NGOs in protection of wildlife; and "opportunistic crime security games" have focused on suppressing urban crime. I will discuss our use-inspired research in security games that is leading to new research challenges, including algorithms for scaling up security games as well as for handling significant adversarial uncertainty and learning models of human adversary behaviors.
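    The resource-allocation idea at the heart of security games can be illustrated with a zero-sum toy model: the defender randomizes coverage so the attacker is indifferent between targets. This is a minimal sketch under assumed target values, not the deployed general-sum algorithms (which are typically solved via LP/MILP formulations); the function name and inputs are hypothetical.

```python
def equalizing_coverage(values, resources=1.0):
    """Zero-sum toy model: spread `resources` units of defender coverage
    over targets with values v_i so that every target's uncovered payoff
    (1 - c_i) * v_i is equal, leaving the attacker indifferent.
    Assumes the interior solution is feasible (all c_i >= 0)."""
    lam = (len(values) - resources) / sum(1.0 / v for v in values)
    cover = [1.0 - lam / v for v in values]
    assert all(c >= -1e-12 for c in cover), "infeasible: drop low-value targets"
    return cover

# Three targets of value 6, 5 and 4, and one defender resource:
# higher-value targets receive proportionally more coverage.
cov = equalizing_coverage([6.0, 5.0, 4.0])
```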


    Bio:

    Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI and ACM, as well as a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Rist Prize of the Military Operations Research Society, an IBM Faculty Award, an Okawa Foundation Faculty Research Award, the RoboCup Scientific Challenge Award, and local awards such as the Orange County Engineering Council Outstanding Project Achievement Award, the USC Associates Award for Creativity in Research and the USC Viterbi Use-Inspired Research Award. Prof. Tambe has contributed several foundational papers in AI in areas such as multiagent teamwork, distributed constraint optimization (DCOP) and security games. For this research, he has received the Influential Paper Award and a dozen best paper awards at conferences such as AAMAS, IJCAI, IAAI and IVA. In addition, Prof. Tambe's pioneering real-world deployments of ''security games'' have led him and his team to receive the US Coast Guard Meritorious Team Commendation from the Commandant, the US Coast Guard First District's Operational Excellence Award, a Certificate of Appreciation from the US Federal Air Marshals Service and a special commendation from the LA Airport Police of the city of Los Angeles. For his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring Award and the ACM Recognition of Service Award. He has also co-founded a company based on his research, ARMORWAY, where he serves as the director of research. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.



  • Better algorithmic bound for the Komlós discrepancy problem


    Speaker:

    Shashwat Garg, TU Eindhoven

    Date:2016-04-22
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The Komlós conjecture and its special case, the Beck-Fiala conjecture, are challenging open problems in discrepancy theory and convex geometry. The best known bounds for both are due to Banaszczyk from 1998, using a non-constructive convex-geometric argument. While there has been a lot of progress in recent years on algorithms for discrepancy theory, matching Banaszczyk's bounds algorithmically had seemed challenging. I will talk about our recent work, which gives an algorithm achieving the same bounds as Banaszczyk. I will also say something about the more general theorem proved by Banaszczyk. Our result uses ingredients from semidefinite programming duality, martingale theory and convex geometry.
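    For readers new to the area, the quantity being bounded can be made concrete. The sketch below (helper names are illustrative, not from the talk) computes the discrepancy of a ±1 coloring of a set system, i.e. the largest |signed sum| over any set, and brute-forces the optimum for a tiny instance; the Komlós and Beck-Fiala conjectures assert bounds independent of the number of sets under column-norm and degree constraints respectively.

```python
from itertools import product

def discrepancy(sets, coloring):
    """Discrepancy of a +/-1 coloring: max over sets of |signed sum|."""
    return max(abs(sum(coloring[e] for e in s)) for s in sets)

def min_discrepancy(sets, n):
    """Brute-force optimum over all 2^n colorings (tiny n only)."""
    return min(discrepancy(sets, x) for x in product((-1, 1), repeat=n))

# Each element lies in at most 2 sets (a Beck-Fiala instance with t = 2);
# here alternating signs achieve discrepancy 0.
paths = [{0, 1}, {1, 2}, {2, 3}]
```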


  • Online algorithms with recourse


    Speaker:

    Prof. Amit Kumar, Computer Science, IIT Delhi

    Date:2016-04-21
    Time:15:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Online algorithms model situations where the input data arrives over time, and the algorithm needs to take decisions without knowing the future input. They are typically analyzed in the framework of competitive analysis -- the performance of such an algorithm is compared against an adversary which knows the entire future beforehand. Although this has been a very fruitful approach, it often leads to pessimistic results because the adversary is much more powerful than the online algorithm. Recently, there have been attempts to evolve alternate ways of analyzing online algorithms which give more power to the online algorithm (as compared to the offline adversary). I will discuss some recent work on models which allow the algorithm to change a small number of decisions taken in the past. Drawing from examples in scheduling and graph algorithms, I will show that one can get interesting results in this ''online algorithms with recourse'' model.
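    As a concrete example of classical competitive analysis (before any recourse is allowed), Graham's list scheduling assigns each arriving job greedily to the least-loaded machine and is (2 - 1/m)-competitive for makespan; the recourse model discussed in the talk would additionally permit re-assigning a few past jobs. A minimal sketch, with illustrative names:

```python
import heapq

def greedy_makespan(jobs, m):
    """Graham's online list scheduling: each arriving job goes to the
    currently least-loaded of m machines.  This classical online rule is
    (2 - 1/m)-competitive for makespan; a recourse model would further
    allow migrating a small number of past assignments."""
    loads = [0.0] * m  # min-heap of machine loads (all-zero list is a heap)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)
```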


  • Adaptive Broadcast by Fault-Local Self-Stabilizing Spanning Tree Switching


    Speaker:

    Sushanta Karmakar, IIT Guwahati

    Date:2016-04-11
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Switching between protocols is an elegant way of enabling adaptation in distributed systems. Self-stabilizing algorithms have been proposed as a mechanism to handle transient failures in distributed systems. In this work we illustrate self-stabilization in distributed protocol switching by proposing a self-stabilizing distributed algorithm for dynamically switching from one spanning tree to another for broadcasting messages. Both trees are precomputed and rooted at the same node, which is the source of the broadcast. Different properties relating to the delivery of broadcast messages under various failure conditions are investigated. More specifically, the algorithm guarantees that under no failure each broadcast message is correctly delivered to all the nodes in spite of switching. Under arbitrary transient failure in the system, the switching eventually completes, though no specific guarantee can be given on the delivery of broadcast messages. Finally, we investigate the properties that can be guaranteed on the delivery of broadcast messages under a single transient failure. We also consider the case when the trees are not precomputed.
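    To fix ideas, delivery of a broadcast along one precomputed spanning tree can be sketched as below, with the tree given by child-to-parent pointers; the algorithm in the talk switches between two such trees while preserving delivery. The representation and function name are illustrative assumptions, not the paper's notation.

```python
from collections import defaultdict

def broadcast_order(parent, root):
    """Order in which a broadcast reaches the nodes when relayed top-down
    along a precomputed spanning tree, given as child -> parent pointers."""
    children = defaultdict(list)
    for v, p in parent.items():
        children[p].append(v)
    order, stack = [], [root]
    while stack:                    # deliver from the root downwards
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])   # forward the message to the children
    return order
```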


  • Automata and logics over infinite alphabets


    Speaker:

    Amaldev Manuel, CMI

    Date:2016-03-31
    Time:14:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Languages over infinite alphabets provide a useful abstraction for systems with an element of unbounded data, for instance identity information. In this talk I will give an overview of proposals, including our own, to the paradigmatic question of finding suitable logical, automatic, and algebraic models for them.


  • High performance parallel computing software: from basics to exa-scale technology


    Speaker:

    Dr. Reiji SUDA

    Date:2016-03-22
    Time:11:30:00 (IST)
    Venue:404, Bharti building
    Abstract:

    Computer clock frequencies have not improved in recent years, and progress in semiconductor technology contributes to computing performance mostly through higher parallelism. Not only supercomputers, but also PCs and embedded processors, now have multiple cores, and will have 64 or 128 cores in the foreseeable future. Supercomputer performance will be close to 1 exa (1,000,000,000,000,000,000) flops.
    In this lecture, basic concepts of software (i.e. programming) technology for high performance parallel computing are introduced. An analysis of the key factors in parallel processing performance reveals that parallelism alone is not enough; locality is also essential. Key features of the predicted next-generation parallel processors will be briefly inspected, and some recent research efforts toward exa-scale parallel processing will be explained.
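    The point that parallelism alone is not enough is often quantified with Amdahl's law: any serial fraction of the work caps the achievable speedup regardless of core count. A minimal illustration (a standard formula, not taken from the lecture):

```python
def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup on `cores` cores when `serial_fraction` of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even on 128 cores, a 5% serial fraction caps the speedup below 20x.
capped = amdahl_speedup(0.05, 128)
```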


    Bio:

    Dr. Reiji SUDA is a Professor in the Department of Computer Science, the University of Tokyo. Since his master's work in the early 1990s, he has worked in high performance numerical computing, on topics such as: parallel sparse linear solvers for circuit simulation, the parallel Hessenberg double-shift QR algorithm, the Fast Multipole Method (FMM), fast spherical harmonic transforms, task scheduling for parallel systems, automatic performance tuning (autotuning), GPGPU, power-efficient computing, and communication-avoiding algorithms. He teaches courses in numerical algorithms for undergraduate students and a course in parallel numerical computing for graduate students at the University of Tokyo.



  • Some problems in low-level computer vision and 3D multi-view geometry


    Speaker:

    Pulak Purakit, ISI Kolkata 

    Date:2016-03-21
    Time:12:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In this talk, I will address some existing problems in low-level computer vision and 3D multi-view geometry:

    On Morphological Regularizations for Super-Resolution Image Reconstruction
    ∗ Why do we care about this problem? Regularization is a well-known technique to convert an ill-posed problem into a well-posed one and then obtain a stable solution.
    ∗ What methods have been studied? Researchers have studied total variation (TV) based regularization methods for the general inverse problem for the last few decades. However, the images reconstructed by these methods sometimes exhibit ringing artifacts near strong edges.
    ∗ What do we propose? Based on multi-scale morphology, we develop non-linear regularization methods for edge-preserving image reconstruction for general ill-posed inverse problems (in particular super-resolution image reconstruction) that produce better-reconstructed images.

    On the Case for Large Hyperedges in Higher-Order Clustering
    ∗ Why do we care? Most existing work on hypergraph clustering has considered the smallest possible hyperedge size, due to a lack of study of the potential benefits of large hyperedges and of effective algorithms to generate them.
    ∗ What's the answer? We establish that large hyperedges are better from both theoretical and empirical standpoints.
    ∗ What's the approach? We propose a novel guided sampling strategy for large hyperedges, based on the concept of random cluster models. Our method can generate pure large hyperedges that significantly improve grouping accuracy without an exponential increase in sampling cost.
    ∗ What is the implication of the answer? For higher-order clustering, always use large hyperedges, and use a guided sampling method to sample them.

    Globally Optimal Maximum Consensus using A* Search
    ∗ Why is this problem so important in 3D vision? Maximum consensus is arguably the criterion of choice for robust estimation tasks in computer vision. Despite its widespread use, optimizing the criterion is still customarily done by randomized sample-and-test techniques.
    ∗ What's the solution we propose? We aim to change this state of affairs by proposing a very efficient algorithm for global maximization of consensus. Under the framework of LP-type methods, we show how consensus maximization for a wide variety of vision tasks can be posed as a tree search problem.
    ∗ What's the impact on the literature? By returning globally optimal estimates within tens of seconds on typical-sized problems, our approach provides a practical alternative to randomized techniques.


    Bio:

    He is currently a visiting scientist at ISI Kolkata. Previously, he was a postdoctoral researcher at the University of Adelaide, Australia (Sep '13 - Feb '16). He will join Birmingham University, UK, as a Research Associate from April 2016. He completed his M.Tech (2009) and PhD (2013) in Computer Science at ISI Kolkata, and received his B.Sc. in Mathematics (2005) and M.Sc. in Applied Mathematics (2007) from the University of Calcutta.
    Interests: his research interests include image processing, computer vision, 3D multiple-view geometry, and machine learning algorithms for 3D-geometric problems.



  • "Exploiting Symmetries in Statistical (Relational) Models for Efficient Inference"


    Speaker:

    Dr. Parag Singla

    Date:2016-03-11
    Time:15:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Several domains of interest, including social network analysis, biology, vision, NLP and IE, need to represent the underlying relational structure as well as model uncertainty. Statistical relational models such as Markov logic achieve this by combining the power of relational representations (e.g. first-order logic) with statistical models. To perform reasoning in these models efficiently, lifted inference exploits the underlying symmetry by grouping together objects (or states) which behave similarly to each other, and hence have the same associated probabilities. In this talk, we will examine two recent advances in exploiting symmetries for lifted inference: (1) lifting in the presence of constraints over the formulas in the theory, and (2) lifting with contextual symmetries, i.e. symmetries available only under specific assignments to a subset of variables. We will show that the proposed techniques significantly advance the state of the art both conceptually and empirically. We will conclude the talk with related work and directions for future research.


  • "Similarity of Weighted Tree/Graph Structures and Applications"


    Speaker:

    Prof Virendra Bhavsar from University of New Brunswick

    Date:2016-02-25
    Time:16:30:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    The notion of similarity (or matching) has played a very important role from the beginnings of artificial intelligence. The matching process, depending on the application, may involve keyword matching (e.g. Google), schema matching, taxonomic similarity and ontology matching, or other types of similarities. Clustering forms one of the basic computations in many Big Data applications and it is based on the concept of similarity (or distance).

    We have developed novel weighted tree/graph similarity algorithms applicable to many domains, e.g. bioinformatics, e-Business, e-Health, e-Learning and semantic web. We have also carried out high performance implementations of these algorithms on cluster and graphics processing units (GPUs). This talk will present an overview of the algorithms and their applications.

    Bio:

    Dr. Virendrakumar C. Bhavsar is an Honorary Research Professor, the founding Director of the Advanced Computational Laboratory, and former Professor as well as Dean of the Faculty of Computer Science at the University of New Brunswick. He is a Fellow of the International Institute of Cognitive Informatics and Cognitive Computing. He received the B.Eng. (Electronics and Telecommunications) from the University of Poona, India, and the M.Tech. (Electrical Eng.) as well as Ph.D. (Electrical Eng.) degrees from the Indian Institute of Technology, Bombay. He was a faculty member at the Department of Computer Science and Engineering, Indian Institute of Technology, Bombay, 1974-83. Since 1983 he has been at the University of New Brunswick, Canada.

    He was a founding member of the Board of Directors of C3.ca, Inc., a high performance computing initiative in Canada. He is President of SAI Super and Intelligent Computer Systems, Inc., Fredericton. He has been a consultant to industry in the areas of the semantic web, semantic search and matchmaking, artificial intelligence and high performance computing. He has published more than 160 research papers in journals, conference proceedings and book chapters, and has edited four volumes. He co-led the bioinformatics component of the Canadian Potato Genomics project. Recently, he led the Province of New Brunswick component of the Atlantic Computational Excellence Network (ACEnet), a multi-million-dollar high performance computing initiative in Atlantic Canada. His current areas of research include intelligent systems with applications in e-Business and e-Learning, semantic search and semantic matchmaking, natural language processing and deep learning.



  • Low Distortion Inter-surface Mapping via Alternating Convex Optimizations


    Speaker:

    Manish Mandad

    Date:2016-02-18
    Time:16:00:00 (IST)
    Venue:404, Bharti Building
    Abstract:

    We present a novel approach to creating a homeomorphic map between two discrete surfaces. While most previous approaches compose maps over intermediate domains, we directly optimize a map between two surfaces by computing a variance-minimizing mass transport plan. We solve this non-linear problem, which is akin to minimizing the Dirichlet energy of both the map and its inverse, using two alternating convex optimization problems for which efficient numerical algorithms exist. A coarse-to-fine approach based on fast evaluation of geodesic distances through diffusion distances is then employed to provide a robust and efficient inter-surface mapping algorithm that naturally finds correspondences between shapes, with little to no user interaction.


    Bio:

    Manish Mandad is a PhD student at INRIA Sophia Antipolis working with Dr. Pierre Alliez. He will soon start his post-doc in Prof. Leif Kobbelt's group at RWTH Aachen. He is an IIT Delhi alumnus (B.Tech in Computer Science, 2005-09).



  • Airports and Railways: Facility Location Meets Network Design


    Speaker:

    Tobias Moemke, University of Saarlandes

    Date:2016-02-10
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    We introduce a new framework of Airport and Railway Problems, which combines capacitated facility location with network design. In this framework we are given a graph with weights on the vertices and on the edges, together with a parameter k. The vertices of the graph represent cities, and weights denote respectively the costs of opening airports in the cities and building railways that connect pairs of cities. The parameter k can be thought of as the capacity of an airport. The goal is to construct a minimum cost network of airports and railways connecting the cities, where each connected component in the network spans at most k vertices, contains an open airport, and the network satisfies some additional requirements specific to the problem in the framework.

    We consider two problems in this framework. In the AR_F problem there are no additional requirements for the network. This problem is related to capacitated facility location. In the AR_P problem, we require each component to be a path with airports at both endpoints. AR_P is a relaxation of the capacitated vehicle routing problem (CVRP).

    We consider the problems in the two-dimensional Euclidean setting. We show that both AR_F and AR_P are NP-hard, even for uniform vertex weights (i.e., when the cost of building an airport is the same for all cities). On the positive side, we provide polynomial time approximation schemes for AR_F and AR_P when vertex weights are uniform. We also investigate AR_F and AR_P for k = infinity. In this setting we present an exact polynomial time algorithm for AR_F with general vertex costs, which also works for general edge costs. In contrast to AR_F, AR_P remains NP-hard when k = infinity, and we present a polynomial time approximation scheme for general vertex weights.
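    The cost structure of the framework can be made concrete with a small evaluator for a candidate solution: each component pays for its railway edges plus one (cheapest) airport, and may span at most k cities. This is an illustrative sketch under an assumed representation, not an algorithm from the talk.

```python
def ar_cost(vertex_cost, edge_cost, components, k):
    """Cost of a candidate Airport-and-Railway solution.  Every component
    opens its cheapest airport, pays for its railway (tree) edges, and
    may span at most k cities.  The representation is an assumption:
    components = [(list_of_cities, list_of_tree_edges), ...]."""
    total = 0.0
    for cities, tree_edges in components:
        assert len(cities) <= k, "component exceeds airport capacity k"
        total += min(vertex_cost[c] for c in cities)    # open one airport
        total += sum(edge_cost[e] for e in tree_edges)  # build the railways
    return total

# Toy instance: one component of 3 cities; the airport opens at 'b' (cost 3).
vc = {'a': 5, 'b': 3, 'c': 4}
ec = {('a', 'b'): 1, ('b', 'c'): 2}
```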


  • On approximating strip packing with a better ratio than 3/2


    Speaker:

    Andreas Wiese, MPI-Informatik

    Date:2016-02-10
    Time:16:30:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    In the strip packing problem we are given a set of rectangular items that we want to place in a strip of given width such that we minimize the height of the obtained packing. It is a very classical two-dimensional packing problem that has received a lot of attention, and it has applications in many settings such as stock cutting and scheduling. A straightforward reduction from Partition shows that the problem cannot be approximated within a better absolute factor than 3/2. However, this reduction requires the numeric values to be exponentially large. In this work, we present a (1.4+ε)-approximation algorithm with pseudo-polynomial running time. This implies that for polynomially bounded input data the problem can be approximated with a strictly better ratio than for exponential input, which is a very rare phenomenon in combinatorial optimization.
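    For contrast with the (1.4+ε)-approximation discussed in the abstract, the classical Next-Fit Decreasing Height (NFDH) shelf heuristic gives a simple baseline for strip packing. A minimal sketch; the `(width, height)` rectangle format is an assumption:

```python
def nfdh_height(rects, strip_width):
    """Next-Fit Decreasing Height shelf heuristic for strip packing:
    sort rectangles (width, height) by decreasing height, place them left
    to right on shelves, and open a new shelf whenever the next rectangle
    no longer fits.  A classical baseline, asymptotically a 2-approximation."""
    total, shelf_h, x = 0.0, 0.0, 0.0
    for w, h in sorted(rects, key=lambda wh: wh[1], reverse=True):
        assert w <= strip_width, "rectangle wider than the strip"
        if shelf_h == 0.0 or x + w > strip_width:   # open a new shelf
            total += shelf_h
            shelf_h, x = h, 0.0
        x += w
    return total + shelf_h
```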


  • "Facility Location in dynamic networks"


    Speaker:

    Professor Binay Bhattacharya, Simon Fraser University

    Date:2016-02-09
    Time:12:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    This talk discusses the problem of locating facilities in dynamic networks. In dynamic flow models it takes time for flow to traverse an arc, and flow can be delayed at nodes. An edge's capacity represents the number of items that can enter the edge in unit time. Dynamic flow problems require moving a set of items from source nodes to sink nodes in minimal time. Congestion occurs at a vertex when more items are waiting at the vertex to enter an edge than can immediately leave it. Finding a quickest dynamic flow usually requires designing flows that avoid (too much) congestion.
    These problems are motivated by evacuation planning during natural disasters such as earthquakes and severe weather. Each vertex of a dynamic flow network represents a source with a certain number of people. The objective of the planning is to determine an evacuation schedule that moves everybody from all the source nodes to evacuation centers (sinks) in minimal time. The evacuation problem can be viewed as a generalization of the classical k-center and k-median problems.


  • Improving Network Security Through Insurance - A Tale of Cyber-Insurance Markets


    Speaker:

    Ranjan Pal

    Date:2016-01-29
    Time:16:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    In recent years, security researchers have well established the fact that technical security solutions alone will not result in a robust cyberspace due to several issues jointly related to the economics and technology of computer security. In this regard some of them proposed cyber-insurance to be a suitable risk management technique that has the potential to jointly align with the various incentives of security vendors (e.g., Symantec, Microsoft, etc.), cyber-insurers (e.g., security vendors, ISPs, cloud providers, etc.), regulatory agencies (e.g., government), and network users (individuals and organizations), in turn paving the way for robust cyber-security. In this talk, we ask the following important question: can cyber-insurance really improve the security in a network? To answer our question we adopt a market-based approach. We analyze regulated monopolistic, and competitive cyber-insurance markets in our work, where the market elements consist of risk-averse cyber-insurers, risk-averse network users, a regulatory agency, and security vendors (SVs). Our analysis proves that technical solutions will alone not result in optimal network security, and leads to two important results: (i) without contract discrimination amongst users, there always exists a unique market equilibrium for both market types, but the equilibrium is inefficient and does not improve network security, and (ii) in monopoly markets, contract discrimination amongst users results in a unique market equilibrium that is efficient and results in improvement of network security - however, the cyber-insurer can make zero expected profit. The latter fact is often sufficient to de-incentivize the formation or practical realization of successful and stable cyber-insurance markets. 
    To alleviate this problem, we suggest that SVs enter a symbiotic relationship with insurers and lock in the latter's clients via a fair, (topology, investment)-dependent, and differentiated pricing mechanism for the security products manufactured by the SVs. In return for an observed increase in profit, the SVs will share a portion of their profit increase with the cyber-insurers. We mathematically study various pricing mechanisms designed via a common Stackelberg game framework, for a monopoly market setting. Among them, the optimal pricing problem in the binary pricing setting turns out to be NP-hard, for which we develop an efficient randomized approximation algorithm that achieves a pricing very close to the optimal. Our game analysis, combined with market simulation results on practical networking topologies for both monopoly and oligopoly markets, illustrates the potential for cyber-insurers to have market success.


    Bio:

    Ranjan Pal is a Research Scientist at the University of Southern California (USC), affiliated with both the Electrical Engineering and Computer Science departments, where he co-leads the Quantitative Evaluation and Design Group (QED) with Professor Leana Golubchik. He received his PhD in Computer Science from USC in 2014, and was the recipient of the Provost Fellowship throughout his PhD studies. During his PhD, Ranjan held visiting scholar positions at the School of Engineering and Applied Science, Princeton University, USA, and Deutsch Telekom Research Laboratories (T-Labs), Berlin, Germany. His primary research interests lie in the performance modeling, analysis, and design of cyber-security, privacy, communication networks, and the Smart Grid, using tools from economics, game theory, applied probability, algorithms, and mathematical optimization. His research has generated press interests from USC News, and MIT Technology Review. Ranjan has also consulted for Accel Partners.



  • The complexity of expansion problems


    Speaker:

    L. Anand, Princeton

    Date:2016-01-25
    Time:16:00:00 (IST)
    Venue:Bharti Building
    Abstract:

    Graph-partitioning problems are a central topic of research in the study of algorithms and complexity theory. They are of interest to theoreticians, with connections to error correcting codes, sampling algorithms and metric embeddings, among others, and to practitioners, as algorithms for graph partitioning can be used as fundamental building blocks in many applications. One of the central problems studied in this field is the sparsest cut problem, where we want to compute the cut which minimizes the ratio of the number of edges cut to the size of the smaller side of the cut. This ratio is known as the expansion of the cut.
    In this talk, I will discuss three notions of expansion: edge expansion in graphs, vertex expansion in graphs, and edge expansion in hypergraphs. I will define suitable notions of spectra and show how the celebrated Cheeger's inequality carries over to these three settings. I will also talk about higher-order variants of these notions of expansion (i.e. notions of expansion corresponding to partitioning the graph/hypergraph into more than two pieces), and about approximation algorithms for these problems.
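    The first of these notions, the edge expansion of a cut, is simple to compute directly. A minimal sketch, assuming an adjacency-set representation of an undirected graph:

```python
def edge_expansion(adj, S):
    """Edge expansion of a cut S: (# edges leaving S) / min(|S|, |V - S|).
    `adj` maps each vertex to its neighbor set (undirected graph), so each
    crossing edge is counted once, from its endpoint inside S."""
    S = set(S)
    crossing = sum(1 for u in adj if u in S for v in adj[u] if v not in S)
    return crossing / min(len(S), len(adj) - len(S))

# 4-cycle 0-1-2-3-0: the cut {0, 1} has 2 crossing edges, expansion 1.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
```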


  • From Vision-Realistic Rendering to Vision Correcting Displays


    Speaker:

    Professor Brian A. Barsky, 

    Computer Science Division
    School of Optometry
    Berkeley Center for New Media
    Berkeley Institute of Design
    Arts Research Center
    University of California, Berkeley

    Date:2016-01-22
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Present research on simulating human vision and on vision-correcting displays that compensate for the optical aberrations in the viewer's eyes will be discussed. The simulation is not an abstract model but incorporates real measurements of a particular individual's entire optical system. In its simplest form, these measurements can be the individual's eyeglasses prescription; beyond that, more detailed measurements can be obtained using an instrument that captures the individual's wavefront aberrations. Using these measurements, synthetic images are generated. This process modifies input images to simulate the appearance of the scene for the individual. Examples will be shown of simulations using data measured from individuals with high myopia (near-sightedness), astigmatism, and keratoconus, as well as simulations based on measurements obtained before and after corneal refractive (LASIK) surgery.

    Recent work on vision-correcting displays will also be discussed. Given the measurements of the optical aberrations of a user's eye, a vision-correcting display will present a transformed image that, when viewed by this individual, will appear in sharp focus. This could impact computer monitors, laptops, tablets, and mobile phones. Vision correction could be provided in some cases where spectacles are ineffective. One potential application of interest is a heads-up display that would enable a driver or pilot to read the instruments and gauges with his or her lens still focused for the far distance.


    Bio:

    Brian A. Barsky is Professor of Computer Science and Vision Science, and Affiliate Professor of Optometry, at the University of California at Berkeley, USA. He is also a member of the Joint Graduate Group in Bioengineering, an interdisciplinary and inter-campus program between UC Berkeley and UC San Francisco, and a Fellow of the American Academy of Optometry (F.A.A.O.). Professor Barsky has co-authored technical articles in the broad areas of computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation. He is also a co-author of the book An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, co-editor of the book Making Them Move: Mechanics, Control, and Animation of Articulated Figures, and author of the book Computer Graphics and Geometric Modeling Using Beta-splines. Professor Barsky has also held visiting positions at numerous universities in Europe and Asia. He is a speaker at many international meetings, an editor of technical journals and book series in computer graphics and geometric modelling, and a recipient of an IBM Faculty Development Award and a National Science Foundation Presidential Young Investigator Award. Further information about Professor Barsky can be found at http://www.cs.berkeley.edu/~barsky/biog.html.




  • Monitoring Big, Distributed, Streaming Data


    Speaker:

    Prof. Assaf Schuster

    Date:2016-01-19
    Time:16:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    More and more tasks require efficient processing of continuous queries over scalable, distributed data streams. Examples include optimizing systems using their operational log history, mining sentiments using sets of crawlers, and data fusion over heterogeneous sensor networks. However, distributed mining and/or monitoring of global behaviors can be prohibitively difficult. The naïve solution, which sends all data to a central location, mandates extremely high communication volume, thus incurring unbearable overheads in terms of resources and energy. Furthermore, such solutions require an expensive, powerful central platform, while data transmission may violate privacy rules. An attempt to enhance the naïve solution by periodically polling aggregates is bound to fail, exposing a vicious tradeoff between communication and latency. Given a continuous global query, the solution proposed in the talk is to generate filters, called safe zones, to be applied locally at each data stream. Essentially, the safe zones represent geometric constraints which, until violated by at least one of the sources, guarantee that a global property holds. In other words, the safe zones allow for constructive quiescence: there is no need for any of the data sources to transmit anything as long as all constraints hold, with the local data confined to the local safe zone. The typically rare violations are handled immediately, so the latency for discovering global conditions is negligible. The safe zones approach makes the overall system implementation, as well as its operation, much simpler and cheaper. The saving, in terms of communication volume, can reach many orders of magnitude. The talk will describe a general approach for compiling efficient safe zones for many tasks and system configurations.
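    The safe-zone idea can be sketched for the simplest possible global query: "is the average of all local stream values at most a threshold T?". Since the average is a convex combination, the local constraint "my value is at most T" is a valid safe zone: while every node satisfies it, the global property is guaranteed and nothing needs to be transmitted. (A minimal illustrative sketch, assuming this toy query; the class and function names are invented, not from the speaker's systems.)

```python
# Toy safe-zone monitoring of the global query "average <= T".
# Each node stays silent while its local value lies inside its safe zone;
# the coordinator is contacted only on a (typically rare) violation.

class StreamNode:
    def __init__(self, threshold):
        self.threshold = threshold
        self.value = 0.0

    def update(self, new_value):
        """Apply a stream update; return True only on a safe-zone violation."""
        self.value = new_value
        return new_value > self.threshold   # silent while inside the safe zone


def coordinator_resolve(nodes):
    """Called only after a violation: poll all nodes and test whether
    the global query actually still holds."""
    avg = sum(n.value for n in nodes) / len(nodes)
    return avg <= nodes[0].threshold
```

    Note that a local violation does not imply the global property failed: a value above T at one node may be offset by smaller values elsewhere, so the coordinator's poll can find the average still under T. This asymmetry, in which quiescence is free and only violations cost communication, is what makes the approach so much cheaper than centralizing the streams.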


    Bio:

    Prof. Assaf Schuster of the Computer Science Department at the Technion is an ACM Fellow and a world-leading expert in distributed and scalable data mining, Big Data analytics and prediction, cyber security and system vulnerabilities, privacy preservation, cloud resource management, and more. Prof. Schuster has published more than 200 papers in highly selective conferences and journals, some of which have won prestigious awards. He has consulted for leading hi-tech companies such as IBM, HP, Microsoft, and Verint, and has participated in the bumpy journey of quite a few startups, some of which were successful. His research group is well known for its contributions to the field of big data and scalable, real-time knowledge discovery in distributed data streams.




  • Architecting Next-Generation Mobile Platforms


    Speaker:

    Prof. Chita Das

    Date:2016-01-15
    Time:11:00:00 (IST)
    Venue:SIT Seminar room
    Abstract:

    The demand for feature-rich mobile systems such as wearables, smartphones, and tablets has already outpaced the growth of traditional computing platforms. It is projected that SoCs with tens of cores and hundreds of IPs/accelerators will be designed to provide an unprecedented level of features and functionality in the future. Designing such mobile systems within the required performance and power budgets, along with other design constraints, will be a daunting task for computer architects, since any ad hoc, piecemeal solution is unlikely to result in an optimal design. This requires holistic exploration of the complete design space to understand the system-level design trade-offs.

    In this talk, I will discuss several research challenges and some of our ongoing efforts to address these issues. I will start with a comprehensive mobile system simulation platform, called GemDroid, for facilitating the execution of mobile applications and analyzing the performance and energy behavior of CPU, memory, and IPs collectively. Next, we will discuss the design of a heterogeneous memory controller and a sub-framing mechanism to enhance memory system performance. Finally, the scope for performance and power optimizations in the context of multiple applications will be outlined. The talk will conclude with pointers to some future research directions in this emerging area.


    Bio:

    Chita Das is a Distinguished Professor of Computer Science and Engineering at the Pennsylvania State University. His main areas of interest include CMPs and many-core architectures, GPUs, clouds/datacenters, mobile platforms, performance evaluation, and fault-tolerant computing. In particular, he has worked extensively in the area of design and analysis of interconnection networks/on-chip interconnects. He has published over 200 papers in the above areas, has received several best paper awards, and has served on many program committees and editorial boards. He is a Fellow of the IEEE.



  • Static and Dynamic Reasoning for SDNs NetPL


    Speaker:

    Prof. Shriram Krishnamurthi

    Date:2016-01-13
    Time:11:00:00 (IST)
    Venue:CSE Seminar Room Bharti Building 501
    Abstract:

    Software-Defined Networking enables new techniques for reasoning about network behaviour. Moreover, controller programs themselves are amenable to reasoning, independent of the network they control. This talk presents some recent work in both static and dynamic reasoning for SDNs and lays out a landscape for thinking about these issues.


    Bio:

    Shriram Krishnamurthi is a computer scientist, currently a professor of computer science at Brown University and a member of the core development group for the Racket programming language, responsible for the creation of software packages including the Debugger, the FrTime package, and the networking library.
    Krishnamurthi received his PhD from Rice University in 2000, under the direction of Matthias Felleisen. His dissertation is on linguistic reuse and macro systems in the presence of first-class modules. Starting from this topic, Krishnamurthi has moved into software engineering and works on topics such as access control, modularization of verification, web-based interactive programming, and more. One recent effort is a novel, time-oriented programming language called Flapjax, in support of asynchronous web programming. Krishnamurthi has also authored a textbook on programming language design.
    In 2012, Krishnamurthi became the inaugural winner of the Robin Milner Young Researcher Award, given by SIGPLAN to a researcher whose research career began within 20 years of the nomination date. The award citation describes Krishnamurthi as "a prolific researcher who brings programming language theory to bear in many other disciplines, thus exposing its foundational value".



  • A Markovian Approach to Modeling Preferences and Assortment Optimization


    Speaker:

    Vineet Goyal, Columbia U.

    Date:Fri, 8th Jan, 2016, 4PM
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Consider a seller with n (substitutable) products with fixed prices. Each consumer has a random preference (a permutation over products) and selects the most preferred product among those offered by the seller. The seller needs to decide on the subset of products to offer so that the expected revenue over random preferences is maximized. This problem is referred to as the assortment planning problem in the literature and arises in many applications including retailing, recommendations, and online advertising. One of the fundamental challenges in these problems is to identify a "good" model for consumer preferences and substitution behavior, especially since the preferences are latent and not observable.
    We consider a Markovian approach to address the model selection problem, where substitutions are modeled as state transitions in a Markov chain. We show that this model gives a simultaneous approximation for a large family of parametric preference models considered in the literature. Furthermore, assortment optimization under this model can be solved efficiently. We present an iterative algorithm based on a "local-ratio" framework that gives an optimal solution for the unconstrained problem and a constant-factor approximation for the capacity-constrained version. The local-ratio framework allows us to linearize a non-linear revenue function and also provides interesting insights into the assortment optimization problem over other preference models.
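    The Markov-chain choice model described above can be sketched in a few lines (an illustrative toy, not the paper's algorithm): arrival probabilities give the first product a customer considers; offered products act as absorbing "buy" states, while unoffered products redirect their probability mass through the substitution transitions. Sub-stochastic transition rows model customers who leave without purchasing. All names below are invented for illustration.

```python
def choice_probabilities(arrival, transition, offered, iters=200):
    """Purchase probability of each offered product under a Markov-chain
    choice model. `arrival[i]` is the chance a customer first considers
    product i; `transition[i][j]` is the substitution probability from an
    unoffered product i to product j; `offered` is the assortment (a set
    of absorbing states)."""
    n = len(arrival)
    buy = {i: 0.0 for i in offered}
    mass = list(arrival)                 # probability mass still "shopping"
    for _ in range(iters):
        next_mass = [0.0] * n
        moved = False
        for i in range(n):
            if mass[i] == 0.0:
                continue
            if i in offered:             # absorbed: the customer buys i
                buy[i] += mass[i]
            else:                        # substitute according to the chain
                moved = True
                for j in range(n):
                    next_mass[j] += mass[i] * transition[i][j]
        mass = next_mass
        if not moved:                    # all remaining mass was absorbed
            break
    return buy
```

    For example, with arrival [0.2, 0.5, 0.3], assortment {0, 2}, and the middle (unoffered) product substituting to product 0 with probability 0.6 and to product 2 with probability 0.4, both offered products end up purchased with probability 0.5. The assortment question is then which absorbing set maximizes revenue weighted by these probabilities.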


  • Mining Communication Motifs in Dynamic Networks


    Speaker:

    Sayan Ranu, IIT Madras

    Date:Mon, 4th Jan, 2016
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    A fundamental problem in behavioral analysis of human interactions is to understand how communications unfold. In this talk, I will analyze this problem by mining communication motifs from dynamic interaction networks. A communication motif is a recurring subgraph that has a similar sequence of information flow. Mining communication motifs requires us to explore the exponential subgraph search space where existing techniques fail to scale. To tackle this scalability bottleneck, I will discuss a technique called COMMIT. COMMIT converts a dynamic graph into a database of sequences. Through careful analysis in the sequence space, only a small portion of the exponential search space is accessed to identify regions embedding communication motifs. Extensive experiments on three different social networks show COMMIT to be up to two orders of magnitude faster than baseline techniques. Furthermore, qualitative analysis demonstrates communication motifs to be effective in characterizing the recurring patterns of interactions while also revealing the role that the underlying social network plays in shaping human behavior.



2015 talks

  • High performance Graphics Hardware for Ray Tracing


    Speaker:

    Erik Brunvand, Associate Professor, University of Utah

    Date:Mon, 28th Dec, 2015, 11 AM
    Time:11:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    The desire for ever more complex and realistic graphics drives the computer graphics hardware industry. While traditional graphics rendering using a Z-buffer rasterization algorithm is well supported by commercial graphics processing units (GPUs), the desire to improve image quality encourages a new look at an alternative rendering algorithm: ray tracing. While rasterization is used in virtually all real-time rendering applications (e.g., games that use GPUs for rendering), ray tracing software is now used in virtually all motion picture rendering because of the increased realism of the lighting in the rendered images. Ray tracing also has a fundamentally different type of parallelism than is naturally supported on GPUs. In this talk I will describe and compare the rasterization algorithm that is at the core of all commercial GPUs and the ray tracing algorithm that is not well supported on GPUs but supports more realistic lighting in the rendered images. I will also describe the impact of ray tracing on the design of application-specific processors.
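    As a toy illustration of the core operation in ray tracing (and emphatically not of the speaker's hardware designs): each pixel independently asks which object its ray hits first, which is why the algorithm's parallelism profile differs from rasterization's triangle-at-a-time streaming. A minimal ray-sphere intersection, assuming a unit-length direction vector, looks like this:

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Smallest positive t such that origin + t*direction lies on the
    sphere, or None on a miss. Solves |o + t*d - c|^2 = r^2 for t,
    assuming d has unit length (so the quadratic's leading coefficient
    is 1)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0      # nearer of the two roots
    return t if t > 0.0 else None
```

    A renderer evaluates a test like this for every ray against a scene's geometry (accelerated by spatial data structures), and secondary rays for shadows and reflections scatter incoherently through memory, which is precisely the access pattern that motivates ray-tracing-specific processors.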


    Bio:
    Erik Brunvand is an Associate Professor of Computer Science in the School of Computing at the University of Utah. He is also the Director of the Computer Engineering Program there.  His research interests are related generally to computer design and implementation. His research group is currently working on designing special-purpose computers for generating very realistic computer graphic images using a technique called ray tracing. His interests in computer design and engineering extend from the integration of software and hardware at a high level, to the detailed design of the processor, to its VLSI implementation. He has also done significant research in the area of asynchronous system and circuit design including a framework for syntax-directed design of asynchronous systems, and full processor designs using asynchronous techniques. He has published close to 100 articles and technical reports in areas ranging from computer design and VLSI, to arts/technology collaborations, and computer science education.


  • Capacitated Covering, Scheduling to Minimize Energy and Min Edge Cost Flows - a natural convergence.


    Speaker:

    Samir Khuller, University of Maryland

    Date:Tue,22 Dec, 2015, 4PM
    Time:16:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Traditional scheduling algorithms, especially those involving job scheduling on parallel machines, make the assumption that the machines are always available and try to schedule jobs to minimize specific job related metrics. Since modern data centers consume massive amounts of energy, we consider job scheduling problems that take energy consumption into account, turning machines off, especially during periods of low demand. The ensuing problems relate very closely to classical covering problems such as capacitated set cover, and we discuss several recent results in this regard. Finally we show how to view all of these problems through the common lens of min edge cost flows.

    This is a survey talk on several papers, some recent, and some not so recent.


  • Abstract Interpretation of Binaries with Jakstab


    Speaker:

    Johannes Kinder from Royal Holloway, University of London.

    Date:Thu, 17th Dec, 11AM
    Time:11:00:00 (IST)
    Venue:501, Bharti Building
    Abstract:

    Static program analysis has several interesting potential applications in security, from detecting malware to finding vulnerabilities in software. In many of these scenarios, we have to work with a possibly malicious binary executable and do not have access to source code, which poses several challenges to classical static analysis.

    In my talk, I discuss how to overcome these challenges and introduce the integrated approach to disassembly, control flow reconstruction, and static analysis taken in the Jakstab abstract interpretation framework for binaries. Jakstab computes indirect jump targets on the fly and does not require a separate disassembly stage. I then show how the framework can be extended to mix over- and under-approximate reasoning to resolve indirect jumps based on targets observed at runtime. Surprisingly, it is still possible to precisely characterize the guarantees generated by such an analysis.
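    To make the on-the-fly resolution of indirect jumps concrete, here is a toy sketch (not Jakstab itself; the instruction format and names are invented for illustration): abstractly interpreting a straight-line snippet over a sets-of-possible-values domain yields the candidate targets of an indirect jump, so disassembly can continue at exactly those addresses without a separate pass.

```python
# Toy abstract interpreter over a "finite set of values" domain.
# Each register maps to the set of values it may hold; the target set
# of an indirect jump is read off the abstract state.

def abstract_interpret(snippet):
    state = {}                           # register -> set of possible values
    targets = set()
    for op, *args in snippet:
        if op == "mov":                  # mov reg, constant
            reg, const = args
            state[reg] = {const}
        elif op == "add":                # add reg, constant
            reg, const = args
            state[reg] = {v + const for v in state.get(reg, set())}
        elif op == "join":               # merge point: reg may hold either
            reg, other = args            # its own values or `other`'s
            state[reg] = state.get(reg, set()) | state.get(other, set())
        elif op == "jmp_ind":            # indirect jump through reg
            (reg,) = args
            targets |= state.get(reg, set())
    return targets
```

    In a real binary the interesting work is bounding such sets in the presence of loops, memory, and adversarial code, and, as the talk discusses, combining this over-approximation with under-approximate runtime observations while still stating precise guarantees.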


    Bio:

    Johannes Kinder is a Lecturer (Assistant Professor) in the Department of Computer Science at Royal Holloway, University of London. His research focuses on assessing and improving the reliability and security of software, in particular with the help of automated tools. This requires him to cross back and forth between the fields of programming languages, software engineering, and systems security. His principal interests lie in program analysis for real-world systems, runtime monitoring and instrumentation, and specification and detection of malware. Johannes holds a doctorate in computer science from TU Darmstadt, a Diploma degree from TU Munich, Germany, and he completed a postdoc at EPFL in Lausanne.



Copyright © 2017 Department of Computer Science and Engineering. All Rights Reserved.