CloudifierNet – Deep Vision Models for Artificial Image Processing

With the advancement of Artificial Intelligence and Deep Learning, and their multitude of applications in particular, a new area of research is emerging: that of automated systems development and maintenance. In this area of application of Artificial Intelligence, a broad range of computer users – from IT maintenance personnel to software developers – will be able to use automated inference and prediction tools. These automation tools will in the future be available for a multitude of tasks, such as general-purpose automated maintenance of custom applications and of operating system issues. Our vision is to research and develop truly intelligent systems able to analyze user interfaces from various sources and generate real and usable inferences, ranging from architecture analysis to actual code generation. One key element of such systems is artificial scene detection and analysis based on deep learning computer vision. Computer vision models, and particularly deep directed acyclic graphs based on convolutional modules, are generally constructed and trained on natural-image datasets. As a result, during training the models develop natural-image feature detectors, with the exception of the base graph modules, which learn basic primitive features. In the current paper (LINK) we present the base principles of a deep neural pipeline for computer vision applied to artificial scenes (scenes generated by user interfaces or similar). Following a research and development process, our team derived a set of conclusions based on experiments and benchmarking against state-of-the-art deep vision models implemented via transfer learning.
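The transfer-learning baseline mentioned above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: a "pretrained" convolutional backbone is emulated by fixed (frozen) random filters, and only a new classification head is trained on synthetic stand-ins for artificial-scene images. All names, shapes, and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: fixed (frozen) 3x3 filters standing in for
# convolutional feature detectors learned on natural images.
FILTERS = rng.standard_normal((8, 3, 3))

def extract_features(img):
    """Frozen feature extractor: valid 3x3 convolution + global average pooling."""
    h, w = img.shape
    feats = np.empty(len(FILTERS))
    for k, f in enumerate(FILTERS):
        acc = 0.0
        for i in range(h - 2):
            for j in range(w - 2):
                acc += np.sum(img[i:i + 3, j:j + 3] * f)
        feats[k] = acc / ((h - 2) * (w - 2))
    return feats

# Synthetic "artificial scene" dataset (hypothetical): class 0 = darker
# screens, class 1 = brighter screens, 20 examples each.
X = np.array([extract_features(rng.uniform(0, c + 1, (12, 12)))
              for c in (0, 1) for _ in range(20)])
y = np.array([c for c in (0, 1) for _ in range(20)])

# Trainable classification head (logistic regression), fitted by gradient
# descent while the backbone filters above stay frozen -- the essence of
# transfer learning.
w_head = np.zeros(X.shape[1])
b_head = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w_head + b_head)))  # sigmoid predictions
    grad = p - y                                       # logistic-loss gradient
    w_head -= 0.1 * X.T @ grad / len(y)
    b_head -= 0.1 * grad.mean()

acc = ((1.0 / (1.0 + np.exp(-(X @ w_head + b_head))) > 0.5) == y).mean()
print(f"training accuracy of the transferred head: {acc:.2f}")
```

In a real pipeline the frozen filters would be the convolutional layers of a network pretrained on a natural-image dataset, and only the final layers would be re-trained on the artificial-scene data; the sketch keeps that division of labor while staying dependency-free.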