Arnold’s cat map

Recently we delivered a project for a US big data analytics and visualization company. Part of the project employed the so-called Arnold’s cat map. Since Arnold’s cat map is a very interesting mathematical procedure with many potential applications, we’d like to introduce it to the image processing and machine vision community.

In general, Arnold’s cat map is a chaotic map from the torus into itself. It is named after Vladimir Arnold, who demonstrated its effects in the 1960s using an image of a cat (see below).

 

Without going into mathematical details, in this example we see that after 300 iterations we arrive back at the original image. It is possible to define a discrete analogue of the cat map. One of this map’s features is that the image is apparently randomized by the transformation but returns to its original state after a certain number of iterations.
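For readers who want to experiment, here is a minimal Python/NumPy sketch of the discrete cat map (the function name, the choice of transform matrix and the 64×64 test image are ours, purely for illustration):

```python
import numpy as np

def arnold_cat_map(image, iterations=1):
    # Discrete cat map using the matrix [[1, 1], [1, 2]];
    # other equivalent matrices (e.g. [[2, 1], [1, 1]]) are also common.
    n = image.shape[0]
    assert image.shape[0] == image.shape[1], "the map is defined on square images"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    result = image.copy()
    for _ in range(iterations):
        # Pull each output pixel from its mapped source position; since the
        # map is a permutation of the pixel grid, iterating it is periodic.
        result = result[(x + y) % n, (x + 2 * y) % n]
    return result

# Usage: iterate until the original image reappears and report the period.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
current = arnold_cat_map(img)
period = 1
while not np.array_equal(current, img):
    current = arnold_cat_map(current)
    period += 1
print("recurrence period for a 64x64 image:", period)
```

Because the discrete map is a permutation of the pixel grid, the loop above always terminates; the recurrence period depends on the image size.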

For more information on Arnold’s cat map, see http://en.wikipedia.org/wiki/Arnold%27s_cat_map

Markets Gaining Benefits of GPSME Project

As mentioned in previous blogs, at ImageMetry we are collaborating on the development of a toolkit (the GPSME toolkit) that improves the speed of software products by automatically converting CPU source code into GPU (graphics card) source code. The project is based on the cooperation of 4 companies and 2 universities and is supported by the REA (Research Executive Agency) of the EU. More information on this project can be found at www.gp-sme.eu

Our market research revealed numerous industries that will directly benefit from the ability to automatically convert their existing CPU code into a GPU implementation, with the expectation of significant performance gains. For instance: Bioinformatics, Computational Finance, Computational Fluid Dynamics, Data Mining, Defense, Electronic Design Automation, Imaging and Computer Vision, Material Science, Medical Imaging, Molecular Dynamics, Numerical Analysis, Physics, Quantum Chemistry, Oil and Gas/Seismic, Structural Mechanics, Visualization and Docking, Weather and Climate.

We believe that the GPSME toolkit will be available to the public in September 2013. More information can be found at www.gp-sme.eu

 

Example of GPU Usage at ImageMetry and Verifeyed

A remarkable growth in digital images and videos has been observed in the past decade. The question now is: how much can we trust these images and videos? One of our products, Verifeyed, is image and video forgery detection software that can be downloaded from www.verifeyed.com.

Verifeyed is a leading innovator and technology provider in the image forensics field. The mission of the product is to support the widespread use of digital photos and videos by increasing their credibility and reliability in the business world through the deployment of the latest research results. The technology does not use any watermarks or signatures, which is regarded as a significant technical advantage.

Most of the methods we use at Verifeyed are based on complex mathematical and computational techniques. For example, a cloning detector, used to identify a part of the image that has been copied and pasted, typically to hide an important object or region, operates as follows: (1) tiling the image with millions of small overlapping blocks; (2) computing an invariant representation of the overlapping blocks using Fourier-based and moment-based features [Flusser98]; (3) building a kd-tree representation and analyzing block similarity. Despite the strong detection ability of the cloning detection method, it takes on average several minutes to process a typical image, which imposes a strong limitation on the usability of the method at sites where thousands of images have to be processed every day.
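A rough sketch of this block-matching idea in Python is shown below; it is not the production Verifeyed detector, and the block size, the magnitude-spectrum feature and the thresholds are illustrative choices of ours:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.fft import fft2

def detect_cloned_blocks(gray, block=16, step=4, max_feature_dist=1.0, min_offset=24):
    """Toy copy-move detector: overlapping blocks, magnitude-spectrum
    features, and a kd-tree nearest-neighbour search.

    `gray` is a 2-D float array. Returns, for each suspicious pair, the
    (row, col) coordinates of both blocks whose features nearly match and
    that lie far enough apart to be worth inspecting.
    """
    h, w = gray.shape
    coords, feats = [], []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = gray[y:y + block, x:x + block]
            # Magnitude of the 2-D Fourier transform ignores small shifts;
            # keep a few low-frequency coefficients as a compact feature.
            mag = np.abs(fft2(patch))[:4, :4].ravel()
            coords.append((y, x))
            feats.append(mag / (mag[0] + 1e-9))  # normalise by the DC term
    coords = np.asarray(coords)
    feats = np.asarray(feats)

    tree = cKDTree(feats)
    pairs = tree.query_pairs(r=max_feature_dist, output_type="ndarray")
    # Ignore matches between spatially adjacent blocks (trivially similar).
    offsets = np.linalg.norm(coords[pairs[:, 0]] - coords[pairs[:, 1]], axis=1)
    return coords[pairs[offsets > min_offset]]
```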

Another technique detects altered regions of JPEG images by identifying inconsistencies and change points in the discrete cosine transform (DCT) coefficients of the image. The algorithm applies millions of discrete cosine transforms to different sections of the image before transforming the data into a set of one-dimensional statistics and measuring the distance from the homogeneity conditions (the homogeneity conditions are based on JPEG quantization artifacts and image noise levels). To detect the change points, a global approach in which all the change points are detected simultaneously is needed; for typical images of 1024×768 pixels this takes around 40 minutes.
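The change-point model itself is beyond the scope of this post, but the first stage, reducing blockwise DCT coefficients to a simple per-block statistic, can be sketched roughly as follows (the function name and the particular statistic are our own illustrative choices):

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct_statistic(gray, block=8):
    """Compute a simple per-block statistic from 8x8 DCT coefficients.

    Returns a 2-D map of the mean absolute AC energy per block; abrupt
    change points in this map can hint at regions whose compression
    history differs from the rest of the image.
    """
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    stats = np.empty((h // block, w // block))
    for by in range(0, h, block):
        for bx in range(0, w, block):
            coeffs = dctn(gray[by:by + block, bx:bx + block], norm="ortho")
            coeffs[0, 0] = 0.0  # drop the DC term, keep AC energy only
            stats[by // block, bx // block] = np.abs(coeffs).mean()
    return stats
```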

The algorithms in the current Verifeyed system have a strongly parallel nature and thus great potential for GPU acceleration. GPU techniques therefore bring major performance improvements to the methods mentioned above, allowing us to use advanced detection techniques and generate results in just a few milliseconds or seconds.

Specifically, we use the GPSME toolkit to rapidly decrease the cost of converting CPU programs into their GPU versions, resulting in noticeably lower execution times for end users and allowing Verifeyed to bring more cutting-edge image forensics technologies to real-life markets such as media, crime investigation, and insurance. This has a significant positive influence on the market position of the company.

Advances of GPUs

As mentioned in the previous blog, the GPSME project helps SMEs gain easy access to the latest technical advances of GPUs through its toolkit without squandering resources. The toolkit offers semi-automatic source code translation from CPU to GPU, which yields higher performance. The project is based on the cooperation of 4 companies and 2 universities and is supported by the REA (Research Executive Agency) of the EU. More information on this project can be found at www.gp-sme.eu

The intrinsically parallel GPU has always been a processor with ample computational resources and the capability of multi-threaded processing. For highly data-parallel workloads, GPUs now provide better performance than CPUs thanks to their data-parallel architecture and higher achievable arithmetic intensity.

The major inhibiting factors on GPU use have previously been low on-board memory and poor double-precision performance. These have largely been overcome in the current generation of GPUs and GPU clusters, with the new generation of NVIDIA GPUs (codename Fermi) having an 8-fold improvement in performance at double precision, which further widens the performance gap between them and CPUs. Based on the Fermi architecture, the new NVIDIA Tesla 20-series offers off-the-shelf GPU cluster computing, delivering equivalent performance at 1/20th the power consumption and 1/10th the cost compared to the latest quad-core CPU.

Another significant factor is that GPU computational power has become inexpensive and widely available in many moderately configured computers (e.g. desktop PCs, laptops). A typical latest-generation card costs only a few hundred euros, and prices drop rapidly as new hardware emerges. Moreover, low-cost GPU clusters are now commercially available that provide exceptional computing power on the desktop. For example, the latest-generation NVIDIA Tesla S2050 4-GPU cluster retails at around €8,500.

The parallel nature of GPUs can provide vast speed gains for applications in which the computational requirements are large and the parallelism is substantial. Given their wide availability, GPUs are particularly suited to many SME-related applications that target public users and require a considerable amount of heavy computation with limited resources and programming capacity. Even if the final target is a remote supercomputer, early testing and experimentation is often important before a major commitment is made to the use of high-performance computing facilities, which are normally expensive and require advance booking.

 

A General Toolkit for “GPUtilisation” – An Introduction to the GPSME Project

At ImageMetry, we are collaborating on the development of a toolkit (the GPSME toolkit) that improves the speed of software products by automatically converting CPU source code into GPU (graphics card) source code. The project is based on the cooperation of 4 companies and 2 universities and is supported by the REA (Research Executive Agency) of the EU. More information on this project can be found at www.gp-sme.eu

The latest progression of Graphics Processing Unit (GPU) power has not been fully exploited by most SMEs (Small and Medium Enterprises), perhaps because GPU programming calls for professional expertise that differs considerably from their usual training. GPSME is a toolkit that will offer SMEs an uncomplicated way to access GPU power.

Now, for those of you who are new to the term, let’s first take a glance at what a GPU is. GPUs are commonly employed in embedded systems, mobile phones, computers, gaming consoles and much more. A GPU, also referred to as a visual processing unit, is a specialized circuit designed to rapidly manipulate and alter memory in order to accelerate the creation of images in a frame buffer intended for output to a display. Contemporary GPUs are very efficient at manipulating computer graphics, and their highly parallel structure makes them effective for general-purpose computation as well.

The GPSME toolkit will enable SMEs to enhance their products in terms of both speed and quality without major expense. With the GPSME toolkit in hand, SMEs will be able to transform their present CPU code without wasting valuable time or investing extra effort. It will also enable the implementation of advanced techniques with limited runtime and allow SMEs to use complex computing models in their new products. This should bring considerable commercial benefits and strengthen their market position.

Optimal performance can be obtained from “GPUtilisation” techniques that adapt automatic parallelization to the latest GPU compute architectures. The application areas of SMEs differ widely, and GPSME aims to offer enhanced performance across many of them. SMEs that target their applications at moderate platforms will find the technique especially suitable.

However, making use of GPU resources is not straightforward. While new GPU programming paradigms such as CUDA, OpenCL and GRAMPS have made GPU programming easier, an in-depth understanding of GPU architecture is still necessary to maximize the benefits of GPUs. Also, since the current products of the SME participants are CPU-based, they would need large resources to implement the CPU-to-GPU conversion manually. Recognizing both of these issues, the GPSME project will produce a toolkit giving SMEs easy access to the latest technical advances of GPUs without committing major resources. The GPSME toolkit provides automatic source code translation from CPU to GPU, which will result in great performance gains. The target technology will be standard GPU cards and off-the-shelf GPU clusters, which are moderately priced, readily available, and can run without the need to employ extra specialised staff. The toolkit will operate by executing the parallelizable loops of a program on the GPU (a rough illustration of this kind of loop offloading is sketched below), which means that the performance gain is achieved under the current software architecture. This will make adoption particularly attractive to SMEs.
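To give a flavour of the kind of loop offloading involved, here is a hand-written Python sketch using the numba CUDA bindings (this is our own illustration of the general idea, not output of the GPSME toolkit, and it requires a CUDA-capable GPU to run):

```python
import numpy as np
from numba import cuda

@cuda.jit
def brighten_kernel(pixels, gain):
    # Each GPU thread handles one pixel of the flattened image,
    # replacing what would be a serial per-pixel CPU loop.
    i = cuda.grid(1)
    if i < pixels.size:
        v = pixels[i] * gain
        if v > 255.0:
            v = 255.0
        pixels[i] = v

def brighten_on_gpu(image, gain=1.2):
    # Copy the image to the device, launch one thread per pixel,
    # then copy the result back to the host.
    flat = np.ascontiguousarray(image, dtype=np.float32).ravel()
    d_flat = cuda.to_device(flat)
    threads = 256
    blocks = (flat.size + threads - 1) // threads
    brighten_kernel[blocks, threads](d_flat, np.float32(gain))
    return d_flat.copy_to_host().reshape(image.shape)
```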


If predictions hold, the results of GPSME will benefit numerous companies and enhance their competitiveness.

 

Danger of Photoshop (source: YouTube)

Many tutorials on the Internet freely show how to edit digital images. For example, we found these two YouTube videos showing how to make a fake insurance claim by editing digital images. More information can be found at www.verifeyed.com

A New Promotion Video for VerifEyed

Image Tampering in Scientific Literature

Today, we face the problem of digital image forgeries even in scientific literature. For instance, the Journal of Cell Biology, a premier academic journal, estimates that around twenty five (25) percent of manuscripts accepted for publication contain at least one image that has been inappropriately manipulated. In many cases, the author is only trying to clean the background and the changes do not affect the scientific meaning of the results. However, the journal also estimates that roughly one (1) percent of figures are simply fraudulent.

One of the most famous cases of digital image forgery in science occurred in 2004, when a team led by the South Korean scientist Dr. Hwang Woo-Suk published their stem cell research results in the journal Science. Their results showed the successful cloning of stem cells, which offered hope for new cures for diseases. Later, in 2005, one of the co-authors admitted that photographs in the paper had been tampered with. This resulted in, among other things, the resignation of Dr. Hwang from his position at Seoul National University.

Sarah Palin

This photomontage of Sarah Palin was widely circulated on the Internet.

VerifEyed – Enterprise Edition


VerifEyed: NYC Next Idea winner

VerifEyed is the winner of NYC Next Idea. Mayor Bloomberg announced the winners of the competition at City Hall. www.nycedc.com/nextidea http://youtu.be/FRRAj6225-w

VerifEyed – You Can Trust Photos Again (Internet Dating)

Can an Image Forgery Win Pulitzer?

Another famous case of digital image manipulation is this widely published photograph taken during the 2003 Iraq war.

Brian Walski, who was covering the war for the Los Angeles Times, combined two of his Iraqi photographs into one to improve the composition and to create a more interesting image. The image shows an armed British soldier and Iraqi civilians under hostile fire in Basra. The soldier is gesturing at the civilians and urging them to seek cover. The standing man holding a young child in his arms seems to look at the soldier imploringly. It is the kind of picture that wins a Pulitzer. The tampering was discovered by an editor at The Hartford Courant, who noticed that some background people appeared twice in the photograph. It ended with the photographer being fired.

Image Authentication Without Using Watermarks and Signatures

Image authentication without using watermarks and signatures (called the passive or blind approach) is regarded as a new direction and does not need any explicit prior information about the image. The decision about the trustworthiness of the image being analyzed is based on the fusion of the outcomes of separate image analyzers. Here, we provide an overview of some of the methods (analyzers) employed to analyze digital images.

  • Detection of interpolation and resampling. When two or more images are spliced together to create a high-quality and consistent image forgery, geometric transformations are almost always needed. These transformations are typically based on resampling a portion of an image onto a new sampling lattice. This requires an interpolation step, which typically introduces statistical changes into the signal. Detecting these specific statistical changes may signify tampering.
  • Detection of near-duplicated image regions. Detection of duplicated image regions may signify copy-move forgery. In copy-move forgery, a part of the image is copied and pasted into another part of the same image typically with the intention to hide an object or a region.
  • Detection of noise inconsistencies. The amount of noise in an authentic digital image is typically uniformly distributed across the entire image and typically invisible to the human eye. Additive noise is a very commonly used tool to conceal the traces of tampering and is the main cause of failure of many active and passive authentication methods. When digital image forgeries are created, the noise often becomes inconsistent. Therefore, the detection of varying noise levels in an image may signify tampering (a minimal block-wise noise estimator is sketched after this list).
  • Detection of double JPEG compression. In order to alter an image, the image typically must be loaded into photo-editing software, and after the changes are made, the image is re-saved. If the images are in the JPEG format, the newly created image will be JPEG-compressed twice or more. This introduces specific correlations between the discrete cosine transform (DCT) coefficients of image blocks. Knowledge of an image’s JPEG compression history can be helpful in finding traces of tampering.
  • Detection of inconsistencies in color filter array (CFA) interpolated images. Here, the hardware features of digital cameras are used to detect the traces of tampering. Many digital cameras are equipped with a single charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor. Then, typically, the color images are obtained in conjunction with a color filter array. In these cameras, only a single color sample is captured at each pixel location. Missing colors are computed by an interpolating process, called CFA interpolation. This process introduces specific correlations between the pixels of the image, which can be destroyed by the tampering process.
  • Detecting inconsistencies in lighting. Different photographs are taken under different lighting conditions. Thus, when two or more images are spliced together to create an image forgery, it is often difficult to match the lighting conditions from the individual photographs. Therefore detecting lighting inconsistencies offers another way to find traces of tampering.
  • Detecting inconsistencies in perspective. When two or more images are spliced together, it is often difficult to maintain correct perspective. Thus, for instance, applying principles from projective geometry to problems in image forgery detection can also be a useful way to detect traces of tampering.
  • Detecting inconsistencies in chromatic aberration. Optical imaging systems are not perfect and often bring different types of aberrations into an image. One of these aberrations is the chromatic aberration, which is caused by the failure of an optical system to perfectly focus light of different wavelengths. When tampered with, this aberration can become inconsistent across the image. This can be used as another way to detect image forgeries.
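As a concrete illustration of the noise-inconsistency idea mentioned above, a block-wise noise map can be computed roughly as follows (our own toy sketch, not the Verifeyed analyzer; the block size and the Laplacian-based estimator are illustrative choices):

```python
import numpy as np
from scipy.ndimage import laplace

def blockwise_noise_map(gray, block=32):
    """Estimate a robust statistic proportional to the local noise level
    (up to a constant that depends on the high-pass filter) per block.

    Blocks whose estimate differs sharply from their neighbours may
    indicate locally added noise.
    """
    high_pass = laplace(gray.astype(np.float64))
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    noise = np.empty((h // block, w // block))
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = high_pass[by:by + block, bx:bx + block]
            # Median-absolute-deviation estimate, reasonably robust to edges.
            noise[by // block, bx // block] = np.median(np.abs(tile)) / 0.6745
    return noise
```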

Image Integrity Verification by Data Hiding

The data hiding approach refers to embedding secondary data into primary multimedia sources. This is carried out mainly for authentication and tampering detection, copyright protection and distribution control. The idea of hiding information has a long history, likely dating back a couple of thousand years. In recent decades, techniques for adding imperceptible data to multimedia sources have received special attention from the research community, and many data hiding methods have developed into multimedia security applications. Most of them are referred to as digital watermarking or hash marks.

The main advantage of data hiding compared to digital signatures is the ability to associate the secondary data with the primary media in a seamless way. The embedded data are mostly imperceptible and travel with the host image. The data hiding approach can be divided further into several fields; digital watermarking is the most popular one.
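To make the idea concrete, here is a toy least-significant-bit (LSB) embedding sketch in Python; real watermarking schemes are far more sophisticated and robust (to compression, cropping and so on), so this is only for illustration:

```python
import numpy as np

def embed_lsb(image, payload_bits):
    """Toy data hiding: write payload bits into the least significant bit
    of the first len(payload_bits) pixels of an 8-bit grayscale image."""
    flat = image.astype(np.uint8).ravel().copy()
    bits = np.asarray(payload_bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the first n_bits embedded by embed_lsb."""
    return image.astype(np.uint8).ravel()[:n_bits] & 1

# Usage: hide and recover an 8-bit message in a random "image".
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(img, message)
assert list(extract_lsb(stego, len(message))) == message
```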
