To address the low accuracy of lightweight models in detecting safety helmets on resource-constrained, GPU-free embedded platforms at construction sites, where scenes are dense, targets are small, and environments are complex, a safety helmet detection algorithm based on an improved YOLOv7-Tiny is proposed. The backbone and the SPPCSPC module are improved, the SimAM attention mechanism is inserted at several positions in the neck network, and WIoUv3 is adopted as the loss function to improve the detection head. Experimental results on the SHWD safety helmet dataset show that, compared with the original YOLOv7-Tiny, the improved model raises mAP by 1.32%, demonstrating a clear accuracy gain over the baseline and effectively improving detection accuracy in complex construction scenarios.
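For context on one of the components named above: SimAM is a parameter-free attention mechanism with a published closed-form weighting (Yang et al., 2021). The sketch below is a minimal NumPy illustration of that standard formulation, not the paper's actual implementation; the function name `simam` and the regularization constant `lam` are this sketch's own choices.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a feature map of shape
    (C, H, W). Each activation is reweighted by a sigmoid of its
    inverse energy, which favors neurons that differ from the
    per-channel mean. `lam` is a small regularization constant."""
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)          # per-channel mean
    d = (x - mu) ** 2                                # squared deviation
    v = d.sum(axis=(1, 2), keepdims=True) / n        # per-channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5              # inverse energy of each neuron
    return x * (1.0 / (1.0 + np.exp(-e_inv)))        # sigmoid-weighted features

# Example: apply to a random 4-channel 8x8 feature map; the output
# keeps the input shape, so the module drops into a network "for free".
feat = np.random.default_rng(0).standard_normal((4, 8, 8))
out = simam(feat)
```

Because SimAM introduces no learnable parameters, inserting it at multiple neck positions, as the abstract describes, adds attention without increasing model size, which matters on GPU-free embedded platforms.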