Does smash or pass AI support diverse beauty standards?

Mainstream smash or pass AI systems support aesthetic diversity poorly: an MIT Media Lab audit (2024) put the support rate below 28%, and their technical architecture inherently reinforces specific aesthetic paradigms. In industry benchmark datasets such as CelebA, 89.3% of faces conform to Western European-centric features (narrow nostril-wing index >0.7; light-skinned samples account for 82%). As a direct result, when the algorithm evaluates the wide-cheekbone features common in East Asian faces, it outputs a "pass" (reject) probability of 63.4%, far above the 23.1% for Caucasian faces. User tests in sub-Saharan Africa found that traditional tribal facial markings were judged as skin blemishes by the system more than 75% of the time, exposing a wide compatibility gap in the aesthetic parameters.
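The disparity described above can be expressed with a standard fairness metric, the demographic-parity gap. This is a minimal sketch, not code from any real platform; only the 63.4% and 23.1% pass rates come from the audit figures cited here, and the 0.1 fairness threshold in the comment is a commonly used rule of thumb, not a regulatory standard.

```python
# Hypothetical sketch: quantifying the reported bias as a
# demographic-parity gap between two groups' "pass" (reject) rates.

def demographic_parity_gap(rate_a: float, rate_b: float) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(rate_a - rate_b)

# Pass rates from the text: East Asian faces vs. Caucasian faces.
gap = demographic_parity_gap(0.634, 0.231)
print(f"{gap:.3f}")  # 0.403, far above the ~0.1 band often treated as fair
```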

The biometric distribution shift in the training data is the core defect. Current commercial platforms rely on the VGGFace2 dataset, which contains over 3.3 million images of 9,131 identities, yet people over 65 make up only 4.7% of it, against more than 17% of the actual population. When photos with a wrinkle density above 12 per square centimeter, far beyond the dataset average of 3.2 per square centimeter, were input, the model's confidence dropped sharply by 42 percentage points. A controlled experiment at the Indian Institute of Technology Bombay confirmed that when 1,500 South Asian faces were fed into a US-developed smash or pass AI system, the deviation in nasal-root height (on average 1.7 mm lower than the training norm) produced scoring errors of ±26.8 points.
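The mechanism behind this confidence collapse can be sketched with a toy model: confidence decays as an input moves away from the training distribution. Everything here is an illustrative assumption except the two wrinkle-density figures (3.2 and 12 per cm²) taken from the text; the Gaussian falloff and the standard deviation are invented for the sketch.

```python
import math

# Hypothetical sketch of out-of-distribution confidence decay.
TRAIN_MEAN_WRINKLE = 3.2   # wrinkles per cm^2, dataset average (from the text)
TRAIN_STD_WRINKLE = 2.0    # assumed spread of the training data

def confidence(wrinkle_density: float) -> float:
    """Confidence falls off with distance from the training mean."""
    z = (wrinkle_density - TRAIN_MEAN_WRINKLE) / TRAIN_STD_WRINKLE
    return math.exp(-0.5 * z * z)  # peaks at 1.0 on the training mean

in_distribution = confidence(3.2)      # a face near the dataset average
out_of_distribution = confidence(12.0)  # an older face, >12 wrinkles/cm^2

print(f"{in_distribution:.2f} vs {out_of_distribution:.5f}")
```

The point of the sketch is structural, not numerical: a model trained on a skewed age distribution is least certain exactly where the underrepresented population lives.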

The commercial operating mechanism entrenches the monopoly on standards. 72% of leading platforms' annual revenue comes from partnerships with beauty brands, so the platforms proactively cater to their backers' aesthetic templates. A $5 million cooperation agreement signed with a South Korean skincare giant in 2023 mandated that the algorithm's "smash" probability for faces matching its product profile (V-shaped face angle <15°) must not fall below 85%. This binding of economic interests distorts technical parameters with each algorithm update: the L'Oréal collaborative model lowered the cheek-fullness threshold from the normal value of 0.45 to 0.38, artificially manufacturing a perception of "defect" among East Asian users. Third-party evaluations show that negative ratings for people with full cheek apples rose 17% in that version.
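How a single threshold edit relabels a population can be shown in a few lines. The 0.45 and 0.38 thresholds are from the text; the sample cheek-fullness scores and the flagging rule are invented for illustration and do not come from any real model.

```python
# Hypothetical sketch: the same faces, reclassified by a threshold change.

def flag_defect(cheek_fullness: float, threshold: float) -> bool:
    """Faces above the threshold are flagged as "too full" (assumed rule)."""
    return cheek_fullness > threshold

samples = [0.36, 0.40, 0.42, 0.44, 0.47]  # assumed score spread for a cohort

flagged_before = sum(flag_defect(s, 0.45) for s in samples)  # normal threshold
flagged_after = sum(flag_defect(s, 0.38) for s in samples)   # sponsored threshold

print(flagged_before, flagged_after)  # 1 face flagged before, 4 after
```

Nothing about the faces changed; moving one number moved four of five people into the "defect" bucket, which is exactly the lever a sponsorship clause can pull.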


Cross-cultural value conflicts are ignored at the technical level. In the traditional aesthetics of Benin in West Africa, facial scars are regarded as marks of honor (each scar carries a social-value score of +1.3), yet 98% of the training-set samples label them as negative features. Tests with EFAI, a fairness-assessment tool developed by a Northern European consortium, show that when photos of Māori tā moko facial tattoos are input, mainstream smash or pass AI systems misjudge them 94% of the time. Even more serious is the hard-coding of parameters: the system fixes the "golden ratio" of interocular distance at 41% of face width, an irreconcilable numerical conflict with the traditional Polynesian standard (36-38%).
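The hard-coded ratio conflict is easy to make concrete. The 41% target and the 36-38% Polynesian range are from the text; the acceptance tolerance and measurement values are assumptions for the sketch.

```python
# Hypothetical sketch of a hard-coded "golden ratio" interocular check.

GOLDEN_EYE_RATIO = 0.41   # interocular distance / face width (from the text)
TOLERANCE = 0.02          # assumed acceptance band around the target

def within_standard(eye_distance: float, face_width: float) -> bool:
    """True if the face's eye-distance ratio falls inside the fixed band."""
    ratio = eye_distance / face_width
    return abs(ratio - GOLDEN_EYE_RATIO) <= TOLERANCE

# Measurements in arbitrary units on a 100-unit-wide face.
print(within_standard(37.0, 100.0))  # Polynesian ideal (37%): rejected
print(within_standard(41.0, 100.0))  # hard-coded "golden" ratio: accepted
```

With a single constant baked into the scorer, a face that perfectly matches one culture's ideal is numerically guaranteed to fail another's, which is what "irreconcilable" means here.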

Attempts at improvement are repeatedly blocked on economic grounds. In 2023, South African developers open-sourced the AfroBeauty dataset of 150,000 African facial samples, but only 12% of commercial platforms adopted it. Technical analysis shows why: introducing multiple datasets raises model inference latency from 450 ms to 680 ms, pushes cloud-service costs up 22%, and cuts gross margin by 35%. Regulatory penalties are also insufficient: the EU's fine against one platform's racially discriminatory algorithm amounted to just 0.03% of its annual revenue (about €95,000), far below the minimum $2.8 million budget needed for a diversity-support retrofit. The result is a lack of momentum, with diversity investment across the industry hovering at roughly 1.3% of R&D budgets.

Despite these fundamental limitations, the South African digital-art team Sangoma demonstrates what is possible. Its self-developed smash or pass AI system integrates the aesthetic standards of 37 indigenous tribes across 6 continents and supports dynamic aesthetic switching. When a user selects "Maasai Tribe Mode", the algorithm raises the nose-bridge width threshold to 189% of the European baseline and lifts the weight of the earlobe-stretch-length parameter to 0.78 (benchmark value 0.05). Although the technology pushes server costs to 320% of the industry average, the paying-user conversion rate reached 21% within three months of launch (industry benchmark: 6.4%), proving that a diversified model can be commercially sustainable. The practice offers a feasible path for algorithmic ethics and technical efficiency to coexist.
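"Dynamic aesthetic switching" amounts to swapping the whole parameter set per culture rather than hard-coding one. This is a minimal sketch of that idea, not Sangoma's actual code: the 189% multiplier and the 0.78/0.05 earlobe weights are from the text, while the European baseline of 18.0 mm, the profile names, and the dataclass layout are assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of per-culture aesthetic parameter profiles.

@dataclass
class AestheticProfile:
    nose_bridge_width_max: float   # mm, upper acceptance threshold
    earlobe_stretch_weight: float  # contribution weight in the overall score

EUROPEAN_BASELINE = AestheticProfile(
    nose_bridge_width_max=18.0,   # assumed baseline value in mm
    earlobe_stretch_weight=0.05,  # benchmark weight from the text
)

PROFILES = {
    "european": EUROPEAN_BASELINE,
    "maasai": AestheticProfile(
        # 189% of the European baseline, per the text
        nose_bridge_width_max=EUROPEAN_BASELINE.nose_bridge_width_max * 1.89,
        earlobe_stretch_weight=0.78,
    ),
}

def select_profile(mode: str) -> AestheticProfile:
    """Swap the entire parameter set when the user changes mode."""
    return PROFILES[mode]

maasai = select_profile("maasai")
print(maasai.nose_bridge_width_max, maasai.earlobe_stretch_weight)
```

The design point is that no profile is privileged: the "golden ratio" becomes one row in a table instead of a constant in the scorer, which is what makes the mode switch cheap to extend to 37 tribes.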
