Cakmak, Basak; Ongun, Cihan
Dates: 2026-04-04; 2024
ISSN: 0890-0604; eISSN: 1469-1760
DOI: https://doi.org/10.1017/S0890060424000106
Handle: https://hdl.handle.net/11411/10491

Abstract: This paper aims to explore alternative representations of physical architecture using its real-world sensory data through artificial neural networks (ANNs). In the project developed for this research, a detailed 3-D point cloud model is produced by scanning a physical structure with LiDAR. The point cloud data and mesh models are then divided into parts according to architectural references and part-whole relationships, using various techniques to create datasets. A deep learning model is trained on these datasets, and new 3-D models produced by deep generative models are examined. These new 3-D models, embodied in different representations such as point clouds, mesh models, and bounding boxes, are used as a design vocabulary from which combinatorial formations are generated.

Language: en
Access: info:eu-repo/semantics/openAccess
Keywords: Deep Learning; 3-D Deep Generative Models; Point Cloud; Computational Design
Title: Representations in design computing through 3-D deep generative models
Type: Article
Scopus ID: 2-s2.0-85212036673
Other indexing data: Q2; 38; Q2
Web of Science ID: WOS:001372824800001