To date, aerial archaeologists have generally applied either simple rectification procedures or more expensive and time-consuming orthorectification algorithms to correct their aerial photographs, to varying degrees, for the geometric deformations induced by topographic relief, the tilt of the camera axis, and the distortion of the optics. Irrespective of the method applied, the georeferencing of the images is commonly established with ground control points, whose identification and measurement are time-consuming operations that often prevent certain images from being accurately georeferenced. Moreover, specialised software, photogrammetric skills, and experience are required. Thanks to recent advances in the fields of computer vision and photogrammetry, as well as improvements in processing power, it is now possible to generate orthophotos from large collections of almost randomly acquired aerial photographs in a straightforward and nearly automatic way. This paper presents a computer vision-based approach, complemented by proven photogrammetric principles, to generate orthophotos from a range of uncalibrated oblique and vertical aerial frame images. In a first phase, the method uses algorithms that automatically compute the viewpoint of each photograph as well as a sparse 3D geometric representation of the imaged scene. Dense reconstruction algorithms are then applied to yield a three-dimensional surface model. Once georeferenced, this model can be used to create any kind of orthophoto from the initial aerial views. To demonstrate the benefits of this approach over the most common ways of georeferencing aerial imagery, several archaeological case studies are presented. These not only showcase the straightforward workflow and the accuracy of the results, but also show that the approach moves beyond current restrictions, since it can handle datasets previously thought to be unsuited for convenient georeferencing.
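The georeferencing step described above maps the arbitrarily scaled and oriented surface model onto a ground coordinate system, which amounts to estimating a seven-parameter 3D similarity transform (scale, rotation, translation) from a small number of control points. A minimal sketch of this step is given below; the function name and the use of Umeyama's least-squares method are our illustration, not necessarily the implementation used in the paper.

```python
import numpy as np

def similarity_transform(model_pts, ground_pts):
    """Estimate scale s, rotation R (3x3), and translation t such that
    ground ≈ s * R @ model + t, in the least-squares sense (Umeyama).

    model_pts, ground_pts: (n, 3) arrays of corresponding points,
    e.g. model coordinates of ground control points and their
    surveyed ground coordinates.
    """
    n = len(model_pts)
    mu_m = model_pts.mean(axis=0)
    mu_g = ground_pts.mean(axis=0)
    dm = model_pts - mu_m               # centred model coordinates
    dg = ground_pts - mu_g              # centred ground coordinates

    # Cross-covariance (model -> ground) and its SVD
    H = dm.T @ dg / n
    U, S, Vt = np.linalg.svd(H)

    # Sign correction guarantees a proper rotation (det(R) = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    # Scale from the model-point variance, translation from the centroids
    var_m = (dm ** 2).sum() / n
    s = np.trace(np.diag(S) @ D) / var_m
    t = mu_g - s * R @ mu_m
    return s, R, t
```

With three or more well-distributed control points, the recovered transform can be applied to every vertex of the surface model, after which orthophotos rendered from it inherit the ground coordinate system directly.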