Although critical for habitat and species conservation, camera trap image analysis is often manual, time-consuming, and expensive. Automating this process would enable large-scale research on biodiversity hotspots for large, conspicuous mammal and bird species. This paper explores the use of deep learning object detection and species-level classification models for this task, comparing two state-of-the-art architectures, YOLOv5 and Faster R-CNN, on two species: the white-lipped peccary and the collared peccary. The dataset contains 7,733 images, obtained after data augmentation, from the Tiputini Biodiversity Station. The models were trained on 70% of the dataset, validated on 20%, and tested on the remaining 10%. The YOLOv5 model proved more robust, achieving lower losses and a higher overall mAP (mean Average Precision) than Faster R-CNN. This work is a first step towards an automated camera trap analysis tool, enabling large-scale analysis of population and habitat trends to benefit conservation. The results suggest that hyperparameter fine-tuning would further improve the models and allow the tool to be extended to other native species.
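The 70/20/10 train/validation/test split described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the function name, the fixed random seed, and the synthetic file names are assumptions; only the split fractions and the dataset size of 7,733 images come from the abstract.

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.2, seed=42):
    """Shuffle items and partition them into train/val/test subsets.

    Fractions 0.7/0.2/0.1 mirror the split described in the abstract;
    the function name and seed are illustrative assumptions.
    """
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remainder (~10%) goes to the test set
    return train, val, test

# Hypothetical file names, sized to the paper's 7,733 augmented images
images = [f"img_{i:05d}.jpg" for i in range(7733)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # -> 5413 1546 774
```

Shuffling before splitting avoids grouping augmented variants of the same source image by file order, although a production pipeline would typically split by original image (before augmentation) to prevent train/test leakage.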