<!DOCTYPE article
PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20190208//EN"
       "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.4" xml:lang="en">
 <front>
  <journal-meta>
   <journal-id journal-id-type="publisher-id">Automation and modeling in design and management</journal-id>
   <journal-title-group>
    <journal-title xml:lang="en">Automation and modeling in design and management</journal-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Автоматизация и моделирование в проектировании и управлении</trans-title>
    </trans-title-group>
   </journal-title-group>
   <issn publication-format="print">2658-3488</issn>
   <issn publication-format="online">2658-6436</issn>
  </journal-meta>
  <article-meta>
   <article-id pub-id-type="publisher-id">55943</article-id>
   <article-id pub-id-type="doi">10.30987/2658-6436-2022-4-18-28</article-id>
   <article-categories>
    <subj-group subj-group-type="toc-heading" xml:lang="ru">
     <subject>Автоматизация и управление технологическими процессами и производствами, системы автоматизации проектирования</subject>
    </subj-group>
    <subj-group subj-group-type="toc-heading" xml:lang="en">
     <subject>Automation and control of technological processes and production, automated design systems</subject>
    </subj-group>
   </article-categories>
   <title-group>
    <article-title xml:lang="en">Forming synthetic data for training a computer vision system</article-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Формирование синтетических данных для обучения системы компьютерного зрения</trans-title>
    </trans-title-group>
   </title-group>
   <contrib-group content-type="authors">
    <contrib contrib-type="author">
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Копылов</surname>
       <given-names>Денис Александрович</given-names>
      </name>
      <name xml:lang="en">
       <surname>Kopylov</surname>
       <given-names>Denis Alexandrovich</given-names>
      </name>
     </name-alternatives>
     <xref ref-type="aff" rid="aff-1"/>
    </contrib>
    <contrib contrib-type="author">
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Агешин</surname>
       <given-names>Егор Сергеевич</given-names>
      </name>
      <name xml:lang="en">
       <surname>Ageshin</surname>
       <given-names>Yegor Sergeevich</given-names>
      </name>
     </name-alternatives>
     <xref ref-type="aff" rid="aff-2"/>
    </contrib>
    <contrib contrib-type="author">
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Хомутская</surname>
       <given-names>Ольга Владиславовна</given-names>
      </name>
      <name xml:lang="en">
       <surname>Khomutskaya</surname>
       <given-names>Olga Vladislavovna</given-names>
      </name>
     </name-alternatives>
     <xref ref-type="aff" rid="aff-3"/>
    </contrib>
   </contrib-group>
   <aff-alternatives id="aff-1">
    <aff>
      <institution xml:lang="ru">Московский авиационный институт</institution>
     <city>Москва</city>
     <country>Россия</country>
    </aff>
    <aff>
      <institution xml:lang="en">Moscow Aviation Institute</institution>
      <city>Moscow</city>
     <country>Russian Federation</country>
    </aff>
   </aff-alternatives>
   <aff-alternatives id="aff-2">
    <aff>
      <institution xml:lang="ru">Московский авиационный институт</institution>
     <city>Москва</city>
     <country>Россия</country>
    </aff>
    <aff>
     <institution xml:lang="en">Moscow Aviation Institute</institution>
      <city>Moscow</city>
     <country>Russian Federation</country>
    </aff>
   </aff-alternatives>
   <aff-alternatives id="aff-3">
    <aff>
      <institution xml:lang="ru">Московский авиационный институт</institution>
     <country>Россия</country>
    </aff>
    <aff>
     <institution xml:lang="en">Moscow Aviation Institute</institution>
     <country>Russian Federation</country>
    </aff>
   </aff-alternatives>
   <pub-date publication-format="print" date-type="pub" iso-8601-date="2022-12-30T00:02:13+03:00">
    <day>30</day>
    <month>12</month>
    <year>2022</year>
   </pub-date>
   <pub-date publication-format="electronic" date-type="pub" iso-8601-date="2022-12-30T00:02:13+03:00">
    <day>30</day>
    <month>12</month>
    <year>2022</year>
   </pub-date>
   <volume>2022</volume>
   <issue>4</issue>
   <fpage>18</fpage>
   <lpage>28</lpage>
   <history>
    <date date-type="received" iso-8601-date="2022-06-21T00:00:00+03:00">
     <day>21</day>
     <month>06</month>
     <year>2022</year>
    </date>
    <date date-type="accepted" iso-8601-date="2022-08-02T00:00:00+03:00">
     <day>02</day>
     <month>08</month>
     <year>2022</year>
    </date>
   </history>
   <self-uri xlink:href="https://bstu.editorum.ru/en/nauka/article/55943/view">https://bstu.editorum.ru/en/nauka/article/55943/view</self-uri>
   <abstract xml:lang="ru">
     <p>Приведен метод формирования синтетических данных для обучения нейронной сети (далее – нейросеть) распознавать существующие объекты. Данный метод призван упростить процесс составления начального набора данных и его изменения для дальнейшего использования в компьютерном зрении. В качестве образца объекта для распознавания используется напечатанный с помощью аддитивных технологий редуктор авиационного двигателя. Трехмерные модели загружались в трехмерный редактор Houdini, где с помощью подпрограммы (далее – скрипт) на Python сохранялась коллекция скриншотов детали на разном фоне. Полученный набор данных использовался для обучения трех нейронных сетей на сайте Roboflow, а полученные результаты анализировались для возможности дальнейшего использования данного метода. В статье подробно показан процесс создания скриншотов и результат распознавания напечатанной детали с помощью трех нейронных сетей.</p>
   </abstract>
   <trans-abstract xml:lang="en">
     <p>The article presents a method for generating synthetic data to train a neural network to recognize existing objects. The method is designed to simplify compiling the initial dataset and modifying it for further use in computer vision. An aircraft engine gearbox printed using additive technologies serves as the sample object for recognition. Three-dimensional models are loaded into the Houdini 3D editor, where a Python script saves a collection of screenshots of the part against different backgrounds. The resulting dataset is used to train three neural networks on the Roboflow website, and the results are analysed to assess the suitability of the method for further use. The article shows in detail the process of creating the screenshots and the result of recognizing the printed part with each of the three neural networks.</p>
   </trans-abstract>
   <kwd-group xml:lang="ru">
    <kwd>распознавание объекта</kwd>
    <kwd>компьютерное зрение</kwd>
    <kwd>двигателестроение</kwd>
    <kwd>трехмерный редактор</kwd>
    <kwd>Houdini</kwd>
    <kwd>Python</kwd>
    <kwd>нейронные сети</kwd>
    <kwd>машиностроение</kwd>
    <kwd>производство</kwd>
   </kwd-group>
   <kwd-group xml:lang="en">
     <kwd>object recognition</kwd>
    <kwd>computer vision</kwd>
    <kwd>engine building</kwd>
    <kwd>3D editor</kwd>
    <kwd>Houdini</kwd>
    <kwd>Python</kwd>
    <kwd>neural networks</kwd>
    <kwd>mechanical engineering</kwd>
    <kwd>manufacturing</kwd>
   </kwd-group>
  </article-meta>
 </front>
 <body>
  <p></p>
 </body>
 <back>
  <ref-list>
   <ref id="B1">
    <label>1.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Richard J. Chen, Ming Y. Lu, Tiffany Y. Chen, Drew F. K. Williamson, Faisal Mahmood, «Synthetic data in machine learning for medicine and healthcare» Nature Biomedical Engineering (5), 2021. pp. 493-497.</mixed-citation>
      <mixed-citation xml:lang="en">Richard J. Chen, Ming Y. Lu, Tiffany Y. Chen, Drew F. K. Williamson, Faisal Mahmood, «Synthetic data in machine learning for medicine and healthcare» Nature Biomedical Engineering (5), 2021. pp. 493-497.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B2">
    <label>2.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">В. Кондаратцев, А. Крючков, Р. Чумак, «Машинное зрение в промышленной дефектоскопии» PHYGI-TALISM. Москва. 2020.</mixed-citation>
     <mixed-citation xml:lang="en">V. Kondaratcev, A. Kryuchkov, R. Chumak, «Mashinnoe zrenie v promyshlennoy defektoskopii» PHYGI-TALISM. Moskva. 2020.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B3">
    <label>3.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Aleksei Boikov, Vladimir Payor, Roman Savelev, Alexandr Kolesnikov, «Synthetic Data Generation for Steel Defect Detection and Classification Using Deep Learning» Symmetry. 2021. № 13.</mixed-citation>
      <mixed-citation xml:lang="en">Aleksei Boikov, Vladimir Payor, Roman Savelev, Alexandr Kolesnikov, «Synthetic Data Generation for Steel Defect Detection and Classification Using Deep Learning» Symmetry. 2021. no. 13.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B4">
    <label>4.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Erfanian Ebadi, Salehe; Jhang, You-Cyuan; Zook, Alex; Dhakad, Saurav; Crespi, Adam; Parisi, Pete; Borkman, Steven; Hogins, Jonathan; Ganguly, Sujoy, «PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer Vision» Unity Technologies, 2021.</mixed-citation>
      <mixed-citation xml:lang="en">Erfanian Ebadi, Salehe; Jhang, You-Cyuan; Zook, Alex; Dhakad, Saurav; Crespi, Adam; Parisi, Pete; Borkman, Steven; Hogins, Jonathan; Ganguly, Sujoy, «PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer Vision» Unity Technologies, 2021.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B5">
    <label>5.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Matteo Fabbri, Guillem Brasó, «MOTSynth: How Can Synthetic Data Help Pedestrian Detection and Tracking?» Modena. 2021.</mixed-citation>
     <mixed-citation xml:lang="en">Matteo Fabbri, Guillem Brasó, «MOTSynth: How Can Synthetic Data Help Pedestrian Detection and Tracking?» Modena. 2021.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B6">
    <label>6.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Damien Trentesaux, Theodor Borangiu, Paulo Leitão, Jose-Fernando Jimenez, Jairo R. Montoya-Torres, «SOHOMA: International Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing» в Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future. Bogota. Colombia. 2021.</mixed-citation>
      <mixed-citation xml:lang="en">Damien Trentesaux, Theodor Borangiu, Paulo Leitão, Jose-Fernando Jimenez, Jairo R. Montoya-Torres, «SOHOMA: International Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing» in Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future. Bogota. Colombia. 2021.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B7">
    <label>7.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Pablo Martinez-Gonzalez, Sergiu Oprea, John Alejandro Castro-Vargas, Alberto Garcia-Garcia, Sergio Orts-Escolano, Jose Garcia-Rodriguez, Markus Vincze, «International Joint Conference on Neural Networks (IJCNN)» в UnrealROX+: An Improved Tool for Acquiring Synthetic Data from Virtual 3D Environments. Shenzhen. China. 2021.</mixed-citation>
      <mixed-citation xml:lang="en">Pablo Martinez-Gonzalez, Sergiu Oprea, John Alejandro Castro-Vargas, Alberto Garcia-Garcia, Sergio Orts-Escolano, Jose Garcia-Rodriguez, Markus Vincze, «International Joint Conference on Neural Networks (IJCNN)» in UnrealROX+: An Improved Tool for Acquiring Synthetic Data from Virtual 3D Environments. Shenzhen. China. 2021.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B8">
    <label>8.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">«Houdini help» [В Интернете]. Available: https://www.sidefx.com/docs/houdini/index.html.</mixed-citation>
      <mixed-citation xml:lang="en">«Houdini help» [Online]. Available: https://www.sidefx.com/docs/houdini/index.html.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B9">
    <label>9.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">D. Rutland, «Orenda 9/10 jet engine power take off gearbox». 2021. [В Интернете]. Available: grabcad.com.</mixed-citation>
      <mixed-citation xml:lang="en">D. Rutland, «Orenda 9/10 jet engine power take off gearbox». 2021. [Online]. Available: grabcad.com.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B10">
    <label>10.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">S. Gutta, «Object Detection Algorithm - YOLO v5 Architecture» [В Интернете]. Available: https://medium.com/analytics-vidhya/object-detection-algorithm-yolo-v5-architecture-89e0a35472ef.</mixed-citation>
      <mixed-citation xml:lang="en">S. Gutta, «Object Detection Algorithm - YOLO v5 Architecture» [Online]. Available: https://medium.com/analytics-vidhya/object-detection-algorithm-yolo-v5-architecture-89e0a35472ef.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B11">
    <label>11.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">ArcGIS API for Python, «How RetinaNet works» [В Интернете]. Available: https://developers.arcgis.com/python/guide/how-retinanet-works/.</mixed-citation>
      <mixed-citation xml:lang="en">ArcGIS API for Python, «How RetinaNet works» [Online]. Available: https://developers.arcgis.com/python/guide/how-retinanet-works/.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B12">
    <label>12.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">S. Ananth, «Faster R-CNN for object detection» [В Интернете]. Available: https://towardsdatascience.com/faster-r-cnn-for-object-detection-a-technical-summary-474c5b857b46.</mixed-citation>
      <mixed-citation xml:lang="en">S. Ananth, «Faster R-CNN for object detection» [Online]. Available: https://towardsdatascience.com/faster-r-cnn-for-object-detection-a-technical-summary-474c5b857b46.</mixed-citation>
    </citation-alternatives>
   </ref>
  </ref-list>
 </back>
</article>
