Due to the constant development of new and better imaging sensors operating according to different physical principles, the need arises for a meaningful combination of all these imaging data. For many years, various research fields – not least the medical and geoscience communities – have been trying to integrate imagery of different modalities to facilitate a better understanding and interpretation of particular phenomena. This topic is also becoming increasingly relevant in archaeology, as imagery and data sources become more diverse and complex. In order to implement such an approach, a dedicated MATLAB toolbox, TAIFU (the Toolbox for Archaeological Image Fusion), has been created. TAIFU serves as a platform for testing current state-of-the-art image fusion methods and facilitates the development of new data integration routines. The toolbox is thus designed to benefit archaeological interpretive mapping of diverse prospection datasets. For example, the data from a magnetogram can be fused with an aerial image to help the archaeologist correlate feature locations, enabling more trustworthy information extraction and better interpretation of hidden geo-cultural features.

TAIFU is currently programmed in such a way that the user has to load two images. Upon import, the toolbox verifies and stores the metadata of the images (such as georeferencing, Exif and IPTC tags). Subsequently, a variety of image fusion processes (some with additional options) can be chosen to create a fused image from the two input images. As a pre- or post-fusion step, the user can also choose among a variety of contrast enhancement algorithms. When a useful result is obtained, the fused image can be saved with all the proper metadata embedded, although the latter can also be stored in a sidecar ASCII file.
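The basic workflow described above can be illustrated with a minimal sketch. TAIFU itself is written in MATLAB and its internals are not shown here, so the Python below only demonstrates the general principle of pixel-level fusion of two co-registered images plus an optional contrast-enhancement step; the function names `equalize` and `fuse_average` are hypothetical, not TAIFU routines.

```python
import numpy as np

def equalize(img):
    """Contrast enhancement by histogram equalization (8-bit grayscale)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    denom = cdf[-1] - cdf_min
    if denom == 0:          # constant image: nothing to equalize
        return img.copy()
    lut = np.clip(np.rint((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[img]

def fuse_average(a, b, w=0.5):
    """Simplest pixel-level fusion: weighted average of two co-registered images."""
    fused = w * a.astype(float) + (1.0 - w) * b.astype(float)
    return np.clip(np.rint(fused), 0, 255).astype(np.uint8)
```

In practice the two inputs (e.g. a magnetogram and an aerial image) must first be brought into the same geometry and resolution via their georeferencing metadata; weighted averaging is only the most basic of the many fusion methods such a toolbox can offer.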
These new metadata not only originate from both input images; they also record the contrast enhancement and fusion algorithms that were used to obtain the final result. At this stage, TAIFU is still in development. So far, about thirty image fusion methods have been implemented, alongside five different image fusion metrics. Although a visual assessment of the result will normally determine which method is best suited to a specific case, these metrics can help to identify which fusion approaches preserve the most information from both input images.
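One common family of fusion metrics quantifies how much information the fused image shares with each input. The abstract does not specify which five metrics TAIFU implements, so the sketch below is only an illustrative example using mutual information, a widely used fusion quality measure: a higher value between the fused image and an input indicates that more of that input's information was preserved.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in bits) between two 8-bit images,
    estimated from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()          # joint probability
    px = pxy.sum(axis=1)               # marginal of a
    py = pxy.sum(axis=0)               # marginal of b
    nz = pxy > 0                       # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (np.outer(px, py)[nz]))).sum())
```

Scoring a fused result would then amount to computing this value against each input image and comparing the totals across fusion methods.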