The IEEE Workshop on Statistical Signal Processing is a unique meeting that brings members of the IEEE Signal Processing Society together with researchers from allied fields such as statistics and bioinformatics. The scope of the workshop includes basic theory, methods and algorithms, and applications of statistics in signal processing. The table below outlines some of the topics to be covered.
- Adaptive systems and signal processing
- Bioinformatics and genomic signal processing
- Monte Carlo methods
- Automotive and industrial applications
- Detection and estimation theory
- Array processing, radar and sonar
- Distributed signal processing
- Communication systems and networks
- Learning theory and pattern recognition
- Multivariate statistical analysis
- Information forensics and security
- System identification and calibration
- New methods, directions and applications
- Time-frequency and time-scale analysis
- Biosignal processing and medical imaging
- Statistical image analysis and imaging
The workshop will be held on August 26-29, 2007, in Madison, Wisconsin, the home of the University of Wisconsin. Madison is a vibrant city situated on a narrow isthmus between two large lakes. The workshop and lodging will be co-located at the spectacular Frank Lloyd Wright-inspired Monona Terrace Convention Center.
Using as an upper bound the 240 papers included in the 2005 SSP workshop technical program, and as a lower bound the 160 papers in the 2003 program, we estimate that about 250 participants will attend the 2007 workshop. The schedule and program will closely follow the model of past workshops.
The workshop will kick off with a welcoming reception on Sunday, August 26. Technical sessions will begin on the morning of Monday, August 27, and finish on the afternoon of Wednesday, August 29. The workshop itself will be organized into relatively small poster sessions of 10-20 papers each. In addition, we are planning four or five plenary lectures, to be held prior to the morning and afternoon poster sessions each day.
High-Resolution Mapping of Chromatin Dynamics in Yeast
One of the central questions in molecular biology is to understand the mechanisms regulating the expression of genes. Advances in technology that allow gene expression to be measured in a genome-wide manner open the door to understanding how the "usage instructions" for transcription are encoded in the regulatory DNA sequences surrounding a gene, and how these sequences affect the binding of the transcription factors that modulate expression. A less understood, yet crucial, aspect of all processes that involve DNA is chromatin, the protein complex around which DNA is packed inside eukaryotic cells. Chromatin has come under much scrutiny in recent years, with results showing that it can be modified to store epigenetic information that marks different regions of the DNA and can change in response to the state of the cell. In this talk I will describe an ongoing investigation examining, in a genome-wide manner using high-resolution tiling arrays, the chromatin state as coded by 12 covalent modifications to chromatin proteins, and its dynamics in terms of turnover rates. I will discuss the implications of these results for the information encoded by the chromatin state and for the interplay between these modifications and transcription.
Order and disorder in the emotional brain
University of Wisconsin, Madison
This talk will provide an overview of recent research from my laboratory on the fundamental nature of affective style and the role of emotion regulation in modulating affective style. Basic research on the brain mechanisms underlying normal forms of emotional regulation will be presented, illustrating how individual differences in the neural correlates of emotion regulation are associated with downstream peripheral biological indices that may be relevant to health. Abnormalities in emotion regulation in both mood disorders and autism will also be featured. The talk will end with research on the role of mental training and plasticity that raises the important possibility that affective style can be transformed in a positive direction, and that such transformation is associated with systematic changes in the brain.
Richard J. Davidson, Ph.D., is William James and Vilas Research Professor of Psychology and Psychiatry and Director of the W.M. Keck Laboratory for Functional Brain Imaging and Behavior at the University of Wisconsin-Madison. He received his doctorate from Harvard University in psychology and has been at Wisconsin since 1984. Davidson is internationally renowned for his research on the neural substrates of emotion and emotional disorders. He is the recipient of numerous awards for his research, including a National Institute of Mental Health Research Scientist Award, a MERIT Award from NIMH, an Established Investigator Award from the National Alliance for Research in Schizophrenia and Affective Disorders (NARSAD), the William James Fellow Award from the American Psychological Society, and the Hilldale Award from the University of Wisconsin-Madison. He was the 1997 Distinguished Scientific Lecturer for the American Psychological Association. He served as a Core Member of the MacArthur Foundation Research Network in Mind-Body Interaction, is currently a Core Member of the MacArthur Foundation Mind-Brain-Body and Health Initiative, and is a member of the Board of Scientific Counselors, NIMH. In 2001-02 he served on the National Academy of Science Panel to evaluate the validity of the polygraph. He was the year 2000 recipient of the most prestigious award given by the American Psychological Association for lifetime achievement: the Distinguished Scientific Contribution Award. He has published more than 150 articles, many chapters and reviews, and edited 12 books.
California Institute of Technology
Abstract. Conventional wisdom and common practice in the acquisition and reconstruction of images from frequency data follow the basic principle of Nyquist-density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the signal or image, i.e., the number of samples in the signal or pixels in the image. This talk will survey an emerging theory that goes by the name of "compressive sampling" and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct signals and images of scientific interest accurately, and sometimes even exactly, from a number of samples far smaller than the desired resolution. Compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this talk, we will provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
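The recovery phenomenon described in the abstract can be illustrated numerically. The sketch below is not from the talk; all parameters are illustrative. It draws far fewer random Fourier-free (Gaussian) measurements than the signal length and recovers a sparse signal using orthogonal matching pursuit, one simple recovery algorithm from the compressive sampling literature:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 128, 48, 4                 # signal length, measurements (m << n), sparsity
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)  # a k-sparse test signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                     # only m linear measurements of x

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares refit on the
    selected support."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("recovery error:", np.linalg.norm(x - x_hat))  # near zero when recovery succeeds
```

With random Gaussian measurements and m well above the sparsity level, recovery is exact with high probability, matching the claim that the sample count scales with information content rather than resolution.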
Emmanuel Candes received his B.Sc. degree from Ecole Polytechnique (France) in 1993, and the Ph.D. degree in statistics from Stanford University in 1998. He is the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. Prior to joining Caltech, he was an Assistant Professor of Statistics at Stanford University ('98-'00). His research interests are in computational harmonic analysis, multiscale analysis, approximation theory, statistical estimation, and detection, with applications to the imaging sciences, signal processing, scientific computing, and inverse problems. Other topics of interest include theoretical computer science, mathematical optimization, and information theory. Dr. Candes received the Third Popov Prize in Approximation Theory in 2001, and the DOE Young Investigator Award in 2002. He was selected as an Alfred P. Sloan Research Fellow in 2001. He co-authored a paper that won the Best Paper Award of the European Association for Signal, Speech, and Image Processing (EURASIP) in 2003. He was selected as the main lecturer at the NSF-sponsored 29th Annual Spring Lecture Series in the Mathematical Sciences in 2004 and as the Aziz Lecturer in 2007. He has also given plenary addresses at major international conferences. In 2005, he was awarded the James H. Wilkinson Prize in Numerical Analysis and Scientific Computing by SIAM. Finally, he is the recipient of the 2006 Alan T. Waterman Medal awarded by the US National Science Foundation.
Massachusetts Institute of Technology
The coded aperture camera and the random camera
I'll describe two rather unconventional cameras. (1) The coded aperture camera is a conventional SLR camera, but with a coded pattern of holes in the aperture. This gives a depth-dependent blur kernel which we design to be easy to identify. Using a sparse image prior, we can estimate, from the captured image, both an all-focus image and (roughly) the depth everywhere. (2) The "random camera" has randomly placed mirrors replacing the lens in a handheld camera. Machine learning methods are critical for both camera calibration and image reconstruction from the sensor data. We develop the theory and compare two different methods for calibration and reconstruction: an MAP approach, and basis pursuit from compressive sensing. We show proof-of-concept experimental results demonstrating successful calibration and image reconstruction, and illustrate the potential for super-resolution and 3D imaging.
Joint work with Anat Levin, Rob Fergus, Fredo Durand, and Antonio Torralba
William T. Freeman is Professor of Electrical Engineering and Computer Science at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, having joined the faculty in 2001. From 1992 to 2001 he worked at Mitsubishi Electric Research Labs (MERL) in Cambridge, MA, most recently as Senior Research Scientist and Associate Director. He studied computer vision for his PhD, received in 1992 from the Massachusetts Institute of Technology, and received a BS in physics and MS in electrical engineering from Stanford in 1979, and an MS in applied physics from Cornell in 1981. His current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. In 1997, he received the Outstanding Paper prize at the Conference on Computer Vision and Pattern Recognition for work on applying bilinear models to "separating style and content". Previous research topics include steerable filters and pyramids, the generic viewpoint assumption, color constancy, and computer vision for computer games. He holds 25 patents. From 1981 to 1987, he worked at the Polaroid Corporation, where he co-developed an electronic printer (the Polaroid Palette) and developed algorithms for color image reconstruction which are used in Polaroid's electronic camera. In 1987-88, Dr. Freeman was a Foreign Expert at the Taiyuan University of Technology, P. R. of China. Dr. Freeman was an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE-PAMI) and a member of the IEEE PAMI TC Awards Committee. He is active in the program and organizing committees of Computer Vision and Pattern Recognition (CVPR), the International Conference on Computer Vision (ICCV), Neural Information Processing Systems (NIPS), and SIGGRAPH. He was the program co-chair for ICCV 2005.
Carnegie Mellon University
Problems in Biological Imaging: Opportunities for Signal Processing
The question I would like to help answer is: what is the role of imaging in systems biology, and what can it do for the field?
In recent years, the focus in the biological sciences has shifted from understanding single parts of larger systems (a vertical approach) to understanding complex systems at the cellular and molecular levels (a horizontal approach); hence the revolution of "omics" projects, genomics and now proteomics. Understanding the complexity of biological systems is a task that requires the acquisition, analysis and sharing of huge databases, and in particular, high-dimensional databases. For example, in the current project on location proteomics, the fluorescence microscopy data sets can have a dimension as high as 5: two spatial dimensions, z-stacks, time series and different color channels (different color probes for different proteins). Processing such huge amounts of bioimages visually is inefficient, time-consuming and error-prone for biologists. Therefore, we would like to move towards automated, efficient and robust processing of such bioimage data sets. Moreover, some information hidden in the images may not be readily available visually. Thus, sophisticated algorithms not only help humans by processing images faster and more efficiently, but also generate new knowledge that visual inspection alone cannot.
The ultimate dream is to have distributed yet integrated large bioimage databases which would allow researchers to upload their data, have it processed, share it, and download data as well as platform-optimized code, all in a common format, something akin to the DICOM format for clinical imaging.
To achieve this goal, we must draw upon a whole host of sophisticated tools from signal processing, machine learning and scientific computing. While such tools are widely present in clinical (medical) imaging, they are not as widespread in imaging of biological systems at cellular and molecular levels. This is a huge challenge and requires integration of interdisciplinary teams.
I will address some of these issues in this presentation, especially those where signal processing expertise can play a significant role.
Jelena Kovačević is a Professor of Biomedical Engineering and Electrical and Computer Engineering and the Director of the Center for Bioimage Informatics at Carnegie Mellon University. Her research interests include bioimaging as well as multiresolution techniques such as wavelets and frames. She received the Dipl. Electr. Eng. degree from the EE Department, Univ. of Belgrade, Yugoslavia, in 1986, and the MS and PhD degrees from Columbia Univ., New York, NY, in 1988 and 1991, respectively. From 1991-2002, she was with Bell Labs, Murray Hill, NJ. She was a co-founder and Technical VP of xWaveforms, based in New York City, NY. She was also an Adjunct Professor at Columbia Univ. In 2003, she joined Carnegie Mellon Univ. She is a Fellow of the IEEE and a coauthor (with Martin Vetterli) of the book Wavelets and Subband Coding (Englewood Cliffs, NJ: Prentice Hall, 1995). She co-authored the paper for which Aleksandra Mojsilović received the Young Author Best Paper Award. Her paper on multidimensional filter banks and wavelets (with Martin Vetterli) was selected as one of the Fundamental Papers in Wavelet Theory. She received the Belgrade October Prize in 1986 and the E.I. Jury Award at Columbia Univ. in 1991. She was the Editor-in-Chief of the IEEE Trans. on Image Processing. She served as an Associate Editor of the IEEE Trans. on Signal Processing, as a Guest Co-Editor (with Ingrid Daubechies) of the Special Issue on Wavelets of the Proceedings of the IEEE, Guest Co-Editor (with Martin Vetterli) of the Special Issue on Transform Coding of the IEEE Signal Processing Magazine, and Guest Co-Editor (with Robert F. Murphy) of the Special Issue on Molecular and Cellular Bioimaging of the IEEE Signal Processing Magazine. She is/was on the Editorial Boards of Foundations and Trends in Signal Processing, the SIAM book series on Computational Science and Engineering, the Journal of Applied and Computational Harmonic Analysis, the Journal of Fourier Analysis and Applications, and the IEEE Signal Processing Magazine. From 2000-2002, she served as a Member-at-Large of the IEEE Signal Processing Society Board of Governors. She is the Chair of the Bio Imaging and Signal Processing Technical Committee. She was the General Chair of ISBI 06, General Co-Chair (with Vivek Goyal) of the DIMACS Workshop on Source Coding and Harmonic Analysis, and General Co-Chair (with Jan Allebach) of the Ninth IMDSP Workshop. She is/was a plenary/keynote speaker at the Statistical Signal Processing Workshop 07, Wavelet Workshop 06, NORSIG 06, ICIAR 05, Fields Workshop 05, DCC 98, as well as SPIE 98.
Tutorials will be held on Sunday, August 26.
Genomic Signal Processing: Issues in Engineering Molecular Medicine
Edward R. Dougherty
Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX
Computational Biology Division, Translational Genomics Research Institute, Phoenix, AZ
Department of Pathology, University of Texas M. D. Anderson Cancer Center, Houston, TX
Abstract: Systems medicine aims at basing diagnosis and treatment on a systems-level understanding of molecular interaction, both intra- and inter-cellular. Ultimately, the enterprise rests on characterizing the interaction of the macromolecules constituting cellular machinery. Genomics, a key driver in this enterprise, involves the study of large sets of genes and proteins, with the goal of understanding systems, not simply components. In this vein, Genomic Signal Processing (GSP) has been defined as the analysis, processing, and use of genomic signals for gaining biological knowledge and the translation of that knowledge into systems-based applications. The major goal of translational genomics is to characterize genetic regulation, and its effects on cellular behavior and function, thereby leading to a functional understanding of disease and the development of systems-based medical solutions. A related goal is to discover families of genes whose products (messenger RNA and protein) can be used to classify disease, thereby leading to molecular-based diagnosis and prognosis. GSP requires the development and use of novel models and methods specifically designed to capture the biological mechanisms of operation and distributed regulation at work within the cell. In particular, it is necessary to develop nonlinear dynamical models that adequately represent genomic regulation and to develop mathematically grounded diagnostic and therapeutic tools based on these models. This talk discusses the relation of GSP to translational genomics, the current level of understanding, research goals, key obstacles to overcome, and the central role that signal processing and related engineering disciplines will play in molecular-based systems medicine.
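One concrete example of the nonlinear dynamical models mentioned in the abstract is the Boolean network, in which each gene is either on or off and is updated by a logic rule over the states of other genes. The sketch below is purely illustrative: the three-gene rules are hypothetical, not taken from the talk. It shows how attractors, the steady states often identified with cellular phenotypes, can be found by simulation:

```python
# Toy 3-gene Boolean network; the regulatory rules are hypothetical,
# chosen only to illustrate the modeling idea.
def step(state):
    a, b, c = state
    return (int(b and not c),  # gene A: activated by B, repressed by C
            int(a),            # gene B: follows A
            int(a or b))       # gene C: activated by A or B

def settle(state, max_steps=20):
    """Iterate the network until it reaches a fixed point (an attractor)."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return state

# Enumerate all 8 initial states and report where each one settles.
# In this particular toy network, every trajectory flows to the
# all-off state (0, 0, 0).
for s in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    print(s, "->", settle(s))
```

In probabilistic extensions of this model class, the update rules are chosen at random, and the resulting dynamics can be analyzed with the statistical tools the abstract alludes to.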
Edward R. Dougherty is a Professor in the Department of Electrical Engineering at Texas A&M University in College Station, Texas, Director of the Genomic Signal Processing Laboratory at Texas A&M University, and Director of the Computational Biology Division of the Translational Genomics Research Institute in Phoenix, Arizona. He holds a Ph.D. degree in mathematics from Rutgers University and an M.S. degree in Computer Science from Stevens Institute of Technology. He is the author of twelve books, editor of five others, and author of more than one hundred and ninety journal papers. He is an SPIE Fellow, a recipient of the SPIE President's Award, and served as an Editor of the Journal of Electronic Imaging for six years. He has contributed extensively to the statistical design of nonlinear operators for image processing and the consequent application of pattern recognition theory to nonlinear image processing. His current research is focused on genomic signal processing, with the central goal being to model genomic regulatory mechanisms for the purposes of diagnosis and therapy.
Information Security: How Ugly Can It Be?
Video and Image Processing Laboratory
School of Electrical and Computer Engineering
West Lafayette, Indiana USA
This talk will overview various aspects of security including basic data security (e.g. cryptography and authentication) and multimedia security (e.g. watermarking and data hiding). Applications including digital rights management (DRM), content protection, and device forensics will be described. Emphasis will be placed on both tutorial overviews and open research problems.
Edward J. Delp was born in Cincinnati, Ohio. He received the B.S.E.E. (cum laude) and M.S. degrees from the University of Cincinnati, and the Ph.D. degree from Purdue University. In May 2002 he received an Honorary Doctor of Technology from the Tampere University of Technology in Tampere, Finland. From 1980-1984, Dr. Delp was with the Department of Electrical and Computer Engineering at The University of Michigan, Ann Arbor, Michigan. Since August 1984, he has been with the School of Electrical and Computer Engineering and the School of Biomedical Engineering at Purdue University, West Lafayette, Indiana. In 2002 he received a chaired professorship and currently is The Silicon Valley Professor of Electrical and Computer Engineering and Professor of Biomedical Engineering. His research interests include image and video compression, multimedia security, medical imaging, multimedia systems, and communication and information theory. Dr. Delp has also consulted for various companies and government agencies in the areas of signal, image, and video processing, pattern recognition, and secure communications. He has published and presented more than 300 papers. Dr. Delp is a Fellow of the IEEE, a Fellow of the SPIE, a Fellow of the Society for Imaging Science and Technology (IS&T), and a Fellow of the American Institute of Medical and Biological Engineering. In 2004 he received the Technical Achievement Award from the IEEE Signal Processing Society for his work in image and video compression and multimedia security. He is a member of Tau Beta Pi, Eta Kappa Nu, Phi Kappa Phi, Sigma Xi, and ACM. From 1997-1999 he was Chair of the Image and Multidimensional Signal Processing (IMDSP) Technical Committee of the IEEE Signal Processing Society. From 1994-1998 he was Vice-president for Publications of IS&T. He was Co-Chair of the SPIE/IS&T Conference on Security, Steganography, and Watermarking of Multimedia Contents, held each January from 1998 to 2006.
Dr. Delp was the General Co-Chair of the 1997 Visual Communications and Image Processing Conference (VCIP) held in San Jose. He was Program Chair of the IEEE Signal Processing Society's Ninth IMDSP Workshop held in Belize in March 1996. He was General Co-Chairman of the 1993 SPIE/IS&T Symposium on Electronic Imaging. Dr. Delp was the Program Co-Chair of the IEEE International Conference on Image Processing that was held in Barcelona in 2003. From 1984-1991 Dr. Delp was a member of the editorial board of the International Journal of Cardiac Imaging. From 1991-1993, he was an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence. From 1992-1999 he was a member of the editorial board of the journal Pattern Recognition. From 1994-2000, Dr. Delp was an Associate Editor of the Journal of Electronic Imaging. From 1996-1998, he was an Associate Editor of the IEEE Transactions on Image Processing. He is also co-editor of the book Digital Cardiac Imaging published by Martinus Nijhoff. In 1990 he received the Honeywell Award and in 1992 the D. D. Ewing Award, both for excellence in teaching. In 2001 Dr. Delp received the Raymond C. Bowman Award for fostering education in imaging science from the Society for Imaging Science and Technology (IS&T). In 2004 he received the Wilfred Hesselberth Award for Teaching Excellence. In 2000 Dr. Delp was selected a Distinguished Lecturer of the IEEE Signal Processing Society, and in 2002 and 2006 he was awarded Nokia Fellowships. During the summers of 1998, 1999, 2001, 2002, 2003, 2005, and 2006 Dr. Delp was a Visiting Professor at the Tampere International Center for Signal Processing at the Tampere University of Technology in Finland. Dr. Delp is a registered Professional Engineer.
Sensor Networks and Data Fusion
John Fisher III
Massachusetts Institute of Technology
Data processing in sensor networks is not easily characterized as a problem of sensing, signal processing, inference, information theory, communications, decision-making, control, computing, or networking -- it involves all of these. The combination of these issues invokes fundamental tradeoffs arising from the distributed nature of computation and deployment, coupled with the limitations on communications, computation, and energy typical of many sensor networks. Probabilistic and information-theoretic methods provide a natural framework for addressing many inference problems in sensor networks; however, in addition to the informational utility of a measurement or set of measurements, one must also consider the resources necessary to acquire and fuse those measurements into a probabilistic model. Additionally, information sharing in a sensor network necessarily involves approximation. Traditional measures of distortion are not sufficient to characterize the quality of approximation, as they do not explicitly address the resulting impact on inference, which is at the core of many data fusion problems.
In this tutorial we will discuss many of these issues within the context of a variety of data fusion problems. We will emphasize the use of probabilistic and information-theoretic methods for inference. We will present fundamental and computable bounds on the rate at which sensor networks can acquire information. Use of these bounds leads to computationally tractable sensor planning algorithms which incorporate resource expenditures explicitly. We will discuss approximate message-passing schemes suitable for inference in sensor networks which consider the impact of distortion on information sharing as it relates to inference. Finally, we will discuss the issue of learning in sensor networks under communications constraints, a problem which arises in situations where the underlying probabilistic model is unknown or only partially specified.
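The trade-off between informational utility and resource cost described in this abstract can be sketched concretely. The following toy planner is not from the tutorial; the sensor models, noise levels, and energy costs are all invented for illustration. It greedily selects sensors for a linear-Gaussian state, scoring each candidate by the mutual information its measurement would provide minus its energy cost, and updating the posterior covariance after each selection:

```python
import numpy as np

rng = np.random.default_rng(1)

d, n_sensors = 2, 6
P0 = np.eye(d)                              # prior covariance of the state x
H = rng.standard_normal((n_sensors, d))     # hypothetical linear sensor models y_i = h_i.x + noise
r = np.full(n_sensors, 0.5)                 # measurement noise variances
cost = rng.uniform(0.01, 0.1, n_sensors)    # hypothetical per-measurement energy costs

def info_gain(P, h, r):
    # Mutual information I(x; y) in nats for one scalar linear-Gaussian measurement.
    return 0.5 * np.log(1.0 + h @ P @ h / r)

def greedy_plan(P, H, r, cost, budget=3):
    chosen = []
    P = P.copy()
    for _ in range(budget):
        best, best_score = None, 0.0
        for i in range(len(H)):
            if i in chosen:
                continue
            score = info_gain(P, H[i], r[i]) - cost[i]
            if score > best_score:
                best, best_score = i, score
        if best is None:                    # no remaining sensor is worth its cost
            break
        chosen.append(best)
        # Kalman-style posterior covariance update after fusing sensor `best`.
        h = H[best]
        k = P @ h / (h @ P @ h + r[best])
        P = P - np.outer(k, h) @ P
    return chosen, P

chosen, P_post = greedy_plan(P0, H, r, cost)
print("selected sensors:", chosen)
print("posterior uncertainty (trace):", float(np.trace(P_post)))
```

Because each selection accounts for what earlier measurements already revealed, redundant sensors score low even if they are individually informative, which is the essence of the resource-aware planning the tutorial describes.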
John Fisher is Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on information-theoretic approaches to machine learning, computer vision, and signal processing. Application areas include signal-level approaches to multi-modal data fusion, signal and image processing in sensor networks, distributed inference under resource constraints, resource management in sensor networks, and analysis of seismic and radar images. In collaboration with the Surgical Planning Lab at Brigham and Women's Hospital, he is developing nonparametric approaches to image registration and functional imaging. He received a BS and MS in Electrical Engineering from the University of Florida in 1987 and 1989, respectively. He earned a PhD in Electrical and Computer Engineering in 1997.
Justin Romberg - Georgia Institute of Technology
Michael Wakin - Caltech
From decades of research in signal processing, we have learned that having a good signal representation can be key for tasks such as compression, denoising, and restoration. The new theory of Compressed Sensing (CS) shows us how a good representation can fundamentally aid us in the acquisition (or sampling) process as well. In this tutorial we will outline the main theoretical results in CS and discuss how the ideas can be applied in next-generation acquisition devices.
The CS paradigm can be summarized neatly: the number of measurements (e.g., samples) needed to acquire a signal or image depends more on its inherent information content than on the desired resolution (e.g., number of pixels). The CS theory typically requires a novel measurement scheme that generalizes the conventional signal acquisition process: instead of making direct observations of the signal, for example, an acquisition device encodes it as a series of random linear projections.
The theory of CS, while still in its developing stages, is far-reaching and draws on subjects as varied as sampling theory, convex optimization, source and channel coding, statistical estimation, uncertainty principles, and harmonic analysis. The applications of CS range from the familiar (imaging in medicine and radar, high-speed analog-to-digital conversion, and super-resolution) to truly novel image acquisition and encoding techniques.
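The random-projection measurement scheme and the convex-optimization recovery mentioned in this tutorial description can be sketched in a few lines. The example below is illustrative (all dimensions and parameters are invented): it acquires a sparse signal through random linear projections and recovers it by basis pursuit, i.e., l1 minimization cast as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

n, m, k = 60, 24, 3                  # signal length, projections (m << n), sparsity
x = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x[idx] = rng.standard_normal(k)      # a k-sparse test signal

Phi = rng.standard_normal((m, n))    # random projection (measurement) matrix
y = Phi @ x                          # m random linear projections of x

# Basis pursuit:  min ||x||_1  s.t.  Phi x = y,
# as a linear program with x = u - v, u >= 0, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x - x_hat))  # near zero when recovery succeeds
```

The l1 objective is what makes the program favor sparse solutions; replacing it with an l2 (least-norm) objective would spread energy over all coordinates and fail to recover x.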
Justin Romberg attended Rice University, where he received the BS (1997), MS (1999), and PhD (2003) degrees in electrical engineering. His graduate work centered on multiscale geometrical models for image processing. In fall of 2003, he joined the Applied and Computational Mathematics Department at Caltech, where he worked on the theoretical foundations of compressive sampling. He spent the fall of 2004 as a Fellow at UCLA's Institute for Pure and Applied Mathematics. In the fall of 2006, he joined the faculty at Georgia Tech. Dr. Romberg is also a consultant for the TV show "Numb3rs".
Michael Wakin received the B.S. degree in electrical engineering and the B.A. degree in mathematics in 2000 (summa cum laude), the M.S. degree in electrical engineering in 2002, and the Ph.D. degree in electrical engineering in 2007, all from Rice University. He was an NSF Mathematical Sciences Postdoctoral Research Fellow in the Department of Applied and Computational Mathematics at the California Institute of Technology and is currently an Assistant Professor at the University of Michigan in Ann Arbor. His research interests include sparse, geometric, and manifold-based models for signal and image processing, approximation, compression, compressive sensing, and dimensionality reduction.
Hamid Krim (North Carolina State University)
Robert Nowak (University of Wisconsin-Madison)
Technical Program Co-Chairs:
Richard Baraniuk (Rice University)
Akbar Sayeed (University of Wisconsin-Madison)
Special Sessions Chair: Ilya Pollak (Purdue University)
Local Arrangements: Yu Hen Hu (University of Wisconsin-Madison)
Finances: Barry Van Veen (University of Wisconsin-Madison)
Publicity: Rebecca Willett (Duke University)
Publications: Mark Coates (McGill)
International Liaisons: Patrice Abry (Ecole Normale Supérieure de Lyon, France)
- Patrice Abry
- Kenneth E. Barner
- Olivier Cappé
- Jean-Francois Chamberland
- Chong Yung Chi
- Leslie Collins
- Victor DeBrunner
- Konstantinos I. Diamantaras
- Petar M. Djuric
- Pier Luigi Dragotti
- Monique Fargues
- John Fisher
- Michael Gastpar
- Alex Gershman
- Paulo Goncalves
- Vivek K Goyal
- Fredrik Gustafsson
- Robert Heath
- Franz Hlawatsch
- Doug Jones
- Nick Kingsbury
- Farinaz Koushanfar
- Vikram Krishnamurthy
- Jeff Krolik
- Persefoni Kyritsi
- Urbashi Mitra
- Eric Moulines
- Ramesh Neelamani
- Ilya Pollak
- Antonia Papandreou-Suppappola
- Mike Rabbat
- Phillip A. Regalia
- Vinay Ribeiro
- Justin Romberg
- Venkatesh Saligrama
- Ali H. Sayed
- Anna Scaglione
- Phil Schniter
- Andrew C. Singer
- Trac D. Tran
- Joel Tropp
- Xiaodong Wang
- Douglas Williams
- Dapeng Oliver Wu
- Daniel Xu
- Arie Yeredor
- Qing Zhao
- Abdelhak M. Zoubir