In machine learning and AI, black-box models are built from input data by algorithms such as deep learning. Although the input variables are known, the complexity of the learned functions and the joint relationships between variables make it difficult for data scientists and ML developers to interpret what happens inside the box and to explain the final decision. This lack of interpretability makes black-box models hard to trust and creates barriers to adopting ML and AI in many domains. The short answer to the question "How to overcome 'Black Box' Model Challenges?" is explainable AI. Designing AI and ML models with explainability techniques improves understanding, increases trust and transparency, and helps avoid biases and discrimination that stem from data quality issues.
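As a concrete illustration, one widely used explainability technique is permutation feature importance: shuffle one feature at a time and measure how much the model's test performance degrades, revealing which inputs the black box actually relies on. The sketch below uses scikit-learn's `permutation_importance` on a random forest; the dataset, model, and parameter choices are illustrative assumptions, not a prescription.

```python
# Minimal sketch: explaining a "black box" model with permutation
# feature importance (one common post-hoc explainability technique).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset; any tabular classification data would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model: accurate, but not directly interpretable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy;
# large drops mark features the model depends on most.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Techniques like this do not open the box itself, but they give stakeholders an evidence-based account of which inputs drive the model's decisions, which is the practical starting point for trust and bias auditing.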