Our work draws attention to digital ageism, the nexus of ageism (discrimination or bias related to age) that is mediated and perpetuated by artificial intelligence (AI) systems and technologies. Building on the World Health Organization's recently published policy brief, "Ageism in AI for Health," and our previous work on digital ageism, this paper aims to advance the current understanding and conceptualization of digital ageism in technology and AI systems broadly, beyond health alone. To do this, we will 1) elaborate on our conceptual model and the ageist technology-mediated cycles of injustice that can produce and reinforce digital ageism; 2) present empirical evidence from our descriptive analysis of seven commonly used facial image datasets to highlight data disparities affecting older adults, providing real-world evidence that substantiates one element of our ageist cycles of injustice; and 3) summarize results from our search of grey literature databases, including the AI Ethics Guidelines Global Inventory, to identify guidance documents that address ageism in AI research or technology development. This paper uniquely contributes both conceptual and empirical evidence of digital ageism, advancing knowledge in the field and deepening our understanding of how ageism in AI is fostered by broader ageist cycles of injustice. Lastly, we briefly outline future considerations for addressing digital ageism.