To understand what an application allows its users to do, we must rely on app functionality descriptions provided by software developers on marketplace app pages and in release notes, i.e., the developer's view of the claimed features. User reviews and public discussions on thematic forums can serve as another source of information about an app's features, and sometimes new features are inspired by this user view. However, little research has analyzed app artifacts to distill actual high-level features; researchers have instead focused on bytecode analysis to understand low-level app behaviors, such as API calls, without necessarily mapping them to features. Herein, we explore the ability of LLMs to reconstruct app feature and functionality descriptions from (middle-level) app artifact information to bridge these perspective and knowledge gaps. We extract diverse unstructured text strings from 235 macOS app artifacts obtained from the Setapp app store and prompt the GPT-4o LLM for a list of possible feature descriptions, which we then compare with the human-written feature list on the app's store page. We observe minor differences in lexical structure in terms of part-of-speech counts, and the semantic similarity (cosine) score ranges from 0.47 to 0.76 with GloVe embeddings and from 0.57 to 0.77 with BERT ones, meaning that even naive prompting can produce app feature descriptions sufficiently similar to the human-produced oracle. Our results show the potential of LLMs for automatic or assisted generation of app feature descriptions in marketplaces and for contrasting claimed and actual app behavior to detect discrepancies.
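To make the comparison step concrete, below is a minimal sketch of how a generated feature description could be scored against a human-written store entry with BERT embeddings and cosine similarity. The model choice (bert-base-uncased), the mean pooling over token states, and the two example strings are assumptions for illustration; the paper does not prescribe this exact setup, and the GloVe variant would follow the same pattern with averaged word vectors.

```python
# Sketch: embed two feature descriptions with BERT and compare them
# via cosine similarity. Assumes the transformers and torch packages.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # shape: (768,)

# Hypothetical LLM output vs. a hypothetical store-page oracle entry.
generated = "Batch-rename files using custom patterns and regular expressions."
oracle = "Rename many files at once with flexible naming rules."

score = torch.nn.functional.cosine_similarity(
    embed(generated), embed(oracle), dim=0
).item()
print(f"cosine similarity: {score:.2f}")
```

A per-app score in this spirit, aggregated over all feature pairs, would yield values on the same 0-to-1 scale as the 0.57-0.77 BERT range reported above.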