
Lossless Data Compression Methods and Compression Performance Metrics

Training Options

  • Duration: 90 Minutes
  • Recorded Access: recorded version only, for one participant; unlimited viewing for 6 months (access information will be emailed 24 hours after completion of the live webinar)
  • Price: US$289.00
  • Refund Policy


The webinar covers the foundations of data compression in the context of the broader subject of information theory. The learner will gain valuable insight into how lossless compression works and why the techniques are robust and trustworthy. From a historical perspective, data compression concepts arrived in a timely manner to aid the expanding data storage and wireless communication needs of the modern era. The measure of information is defined probabilistically: it is formalized as entropy, a term that Claude Shannon borrowed from statistical mechanics.
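To make the probabilistic definition concrete, the sketch below (not from the webinar materials; the function name is my own) computes Shannon entropy, H = -Σ p·log₂(p), over the symbol frequencies of a byte string:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per symbol: H = -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A uniform alphabet maximizes entropy; repetition lowers it.
print(shannon_entropy(b"abcdabcdabcdabcd"))  # 2.0 bits/symbol (4 equiprobable symbols)
print(shannon_entropy(b"aaaaaaaaaaaaaaab"))  # much lower: the data is highly predictable
```

Entropy gives the theoretical floor on bits per symbol that any lossless coder can achieve for a memoryless source.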

The lesson will work through examples of Huffman coding and the dictionary methods that underpin the popular, dominant Lempel-Ziv family, including Lempel-Ziv-Welch (LZW). Transform methods provide advantages for compressing data with measurable periodicity. Closely related to data compression, data deduplication reduces overall redundancy across one or more data storage systems. Applications of lossless data compression will be discussed, and a survey of available data compression technologies will be covered. The speaker will present elements of his own research in data compression performance and tradeoffs.
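As a preview of the Huffman coding examples, here is a minimal sketch (my own illustration, not the speaker's code) that builds a prefix-free Huffman code by repeatedly merging the two least-frequent subtrees:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Assign shorter bit strings to more frequent symbols (a minimal sketch)."""
    freqs = Counter(text)
    if len(freqs) == 1:                        # degenerate single-symbol input
        return {next(iter(freqs)): "0"}
    # Heap entries: (frequency, tiebreak index, symbols in this subtree)
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        f0, _, syms0 = heapq.heappop(heap)     # two least-frequent subtrees
        f1, _, syms1 = heapq.heappop(heap)
        for s in syms0:
            codes[s] = "0" + codes[s]          # left branch of the merged node
        for s in syms1:
            codes[s] = "1" + codes[s]          # right branch
        heapq.heappush(heap, (f0 + f1, tiebreak, syms0 + syms1))
        tiebreak += 1
    return codes

codes = huffman_codes("abracadabra")
# The most frequent symbol 'a' receives a shorter code than the rare 'c' and 'd'.
```

Because no code word is a prefix of another, the bit stream decodes unambiguously, which is the property that makes Huffman coding lossless.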

Why should you Attend: The information age continues to yield more and more data. The Internet, Big Data, Cloud Computing, and data storage requirements are measured in ever-increasing scales of Terabytes, Petabytes, Exabytes, and Zettabytes. Does data compression provide a solution to stem this ever-expanding flood? Can you trust data compression? Doesn't it put your data "at risk"? Are some types of data more compressible than others? How do the methods of lossless data compression (covered here) differ from lossy data compression? Take away from this session a meaningful understanding of lossless compression, its limitations, and the tradeoffs between storage reduction and the increased processing required to compact and re-expand data.
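The "can you trust it" and "storage versus processing" questions can be answered empirically with Python's standard zlib module; this quick sketch (my own, not part of the course) measures a compression ratio and verifies bit-for-bit recovery:

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 500  # repetitive sample
packed = zlib.compress(data, level=9)   # level 9 trades CPU time for a smaller output

ratio = len(data) / len(packed)
print(f"{len(data)} -> {len(packed)} bytes ({ratio:.1f}:1)")

# Lossless means decompression restores the original input exactly.
assert zlib.decompress(packed) == data
```

Highly repetitive data compresses dramatically, while already-random data (e.g. encrypted files) barely compresses at all, illustrating why compressibility varies by data type.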

Areas Covered in the Session:
  • Entropy as the measure of information
  • Shannon's Source Coding Theorem
  • Huffman Trees and Huffman Coding
  • Arithmetic coding
  • Dictionary methods
  • Transform methods
  • Data deduplication
  • Implementation considerations, Open Source software, Hardware compression chips
  • Performance
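Among the areas above, data deduplication is easy to demonstrate in miniature. The following sketch (hypothetical names and block size, purely illustrative) stores each unique fixed-size block once and reconstructs the original from a list of content hashes:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Fixed-size block deduplication: store each unique block once,
    keep an ordered list of hashes referencing the store."""
    store: dict[str, bytes] = {}
    refs: list[str] = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # a repeated block costs only a reference
        refs.append(digest)
    return store, refs

def rehydrate(store: dict[str, bytes], refs: list[str]) -> bytes:
    return b"".join(store[d] for d in refs)

data = (b"A" * 4096) * 8 + (b"B" * 4096) * 2   # ten logical blocks, two unique
store, refs = dedupe_blocks(data)
assert rehydrate(store, refs) == data          # lossless reconstruction
```

Real systems add variable-size (content-defined) chunking and collision handling, but the core idea is the same: redundancy is removed across files, not just within one stream.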

Who Will Benefit:
  • System Architects
  • Programmers
  • Software Developers
  • Data Analysts
  • Technical Managers
  • IT Project Managers
  • IT Managers
  • CIOs
  • CTOs
  • Database Administrators
Dr. Raymond Moberly is a consulting subject matter expert in the field of Information Theory, knowledgeable about principles of error correction, cryptography, and data compression. He has worked extensively in the field of software defined radio. He has led development efforts for embedded software and firmware on microcontrollers, digital signal processors, and field programmable gate arrays. He has extensive experience in the verification and validation of systems through all development phases, including formal qualification testing. He enjoys profiling software in order to analyze and better optimize code performance. Raymond holds a bachelor's degree in engineering from Caltech, a master's in applied mathematics from San Diego State University, and a doctorate in computational science from the Claremont Graduate University.
