CAIP Udemy Set 5 Quiz

1. Which of the following algorithms is most suitable for a binary classification problem?





2. In the context of evaluating a regression model, what does a low R-squared value indicate?





3. Which strategy is essential to address the risk of discrimination in models that are used in loan approval processes?





4. Which of the following is a critical factor in ensuring the ethical use of AI in public health initiatives?





5. What is the primary risk of deploying a machine learning model without adequate consideration of fairness?





6. What role does an API gateway play in the deployment of machine learning models?





7. In the context of AI-driven decision-making in the public sector, what is a primary accountability concern, and how can it be addressed?





8. What is the primary ethical risk of using AI for predictive analytics in law enforcement?





9. Why is it important to conduct a fairness assessment when developing AI models for hiring decisions?





10. Which of the following is a key consideration when engineering features for a time series model?





11. What is the role of dropout in neural networks, and how does it affect model performance?
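For study reference, a minimal sketch of how dropout is typically added to a network (using Keras; the layer sizes and the 0.5 rate are illustrative assumptions):

```python
from tensorflow import keras

# Dropout randomly zeroes a fraction of activations during training,
# discouraging co-adaptation of units and reducing overfitting.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.5),               # active only at training time
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```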





12. Which of the following is an advantage of using a Convolutional Neural Network (CNN) for image recognition?





13. Which of the following is a primary ethical concern when developing machine learning models for predictive policing?





14. Which of the following best describes the trade-off in selecting the learning rate for a gradient descent algorithm?
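To make the trade-off concrete, a toy sketch minimizing a one-dimensional quadratic (the function and learning rates are illustrative assumptions):

```python
def gradient_descent(lr, steps=50):
    """Minimize f(w) = (w - 3)^2 with plain gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2
        w -= lr * grad
    return w

# Too small a rate converges slowly; too large a rate overshoots and diverges.
for lr in (0.01, 0.1, 1.1):
    print(f"lr={lr}: w after 50 steps = {gradient_descent(lr):.3f}")
```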





15. How can companies prevent AI models from unintentionally perpetuating bias in product recommendations?





16. A healthcare provider wants to use AI to predict patient readmission rates. Which data preprocessing step is crucial?





17. Which of the following techniques can be used to prevent overfitting?





18. Which of the following is a key consideration when deploying ML models in edge computing environments?





19. Which strategy is most effective for handling concept drift in a deployed ML model?





20. What is the potential impact of inconsistent data preprocessing on ML models?





21. Which method is commonly used to test the robustness of a deployed ML model?





22. In training a deep learning model, what is the purpose of a learning rate scheduler?
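As a sketch of what a scheduler does in practice (using the Keras LearningRateScheduler callback; the decay factor and interval are illustrative assumptions):

```python
from tensorflow import keras

def step_decay(epoch, lr):
    # Halve the learning rate every 10 epochs (a common step-decay schedule).
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

scheduler = keras.callbacks.LearningRateScheduler(step_decay, verbose=1)
# Passed to training as, e.g.: model.fit(X_train, y_train, epochs=50, callbacks=[scheduler])
```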





23. Which approach is recommended for securely managing and rotating API keys in an ML pipeline?






24. How can addressing business risks in feature engineering improve overall ML project success?





25. Which of the following is a primary concern when deploying ML models in highly regulated industries?





26. What is the purpose of a confusion matrix in evaluating classification models?
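A minimal sketch with scikit-learn (the labels and predictions are illustrative assumptions):

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 1, 0, 1, 0, 1, 1]   # actual labels
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]   # model predictions

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```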





27. What is the most appropriate way to address the ethical implications of using AI in public sector applications?





28. Which is a common challenge when deploying ML models on edge devices?





29. What is a common practice to avoid data leakage during model validation?
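One common safeguard, sketched with a scikit-learn pipeline (the synthetic dataset and model choice are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Keeping the scaler inside the pipeline means it is refit on each training
# fold, so statistics from the validation fold never leak into preprocessing.
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
print(cross_val_score(pipe, X, y, cv=5).mean())
```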





30. Why might one choose to apply log transformation to a numerical feature?
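A quick sketch of the effect on a right-skewed feature (the values are illustrative assumptions):

```python
import numpy as np

income = np.array([20_000, 35_000, 50_000, 120_000, 2_000_000])  # right-skewed feature

# log1p compresses the long right tail, making the distribution more
# symmetric and reducing the influence of extreme values.
print(np.log1p(income).round(2))
```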





31. Which practice is essential for protecting the confidentiality of sensitive data during model training?





32. Which method is commonly used to handle imbalanced datasets in machine learning?
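One widely used option, sketched with class weighting in scikit-learn (the synthetic 95/5 split is an illustrative assumption; resampling methods such as SMOTE are another common choice):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Roughly 95% negative / 5% positive examples.
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)

# class_weight="balanced" upweights errors on the rare class during training.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
```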





33. Which of the following is an ethical risk when using AI to predict employee performance?





34. Which is a common challenge when deploying ML models in production environments?





35. Why is it important to evaluate an ML model on a separate test set after validation?





36. Which challenge is most associated with deploying ML models in a multi-cloud environment?





37. What is the purpose of using cross-validation in model evaluation?





38. Which method is most effective for securing data in transit in an ML pipeline?





39. Which type of cross-validation is particularly useful for imbalanced datasets?
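A short sketch of the idea (synthetic 90/10 data as an illustrative assumption):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=100, weights=[0.9, 0.1], random_state=0)

# StratifiedKFold preserves the class ratio in every fold, which matters
# when the positive class is rare.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for _, val_idx in skf.split(X, y):
    print(round(y[val_idx].mean(), 2))   # minority fraction stays close to 0.1
```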





40. What is a typical challenge when working with video data compared to other data types?





41. What is the best way to describe scalability in AI to a stakeholder?





42. When might a logit transformation be particularly useful in ML?
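As a reminder of what the transformation does (the proportions are illustrative assumptions):

```python
import numpy as np

p = np.array([0.05, 0.50, 0.95])       # values bounded in (0, 1), e.g. proportions or rates
logit = np.log(p / (1 - p))            # maps (0, 1) onto the whole real line
print(logit.round(2))                  # [-2.94  0.    2.94]
```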





43. What is the primary risk of automating the retraining process of an ML model in production?





44. How would you convey the importance of data quality to a stakeholder interested in model performance?





45. Why is tokenization essential in processing textual data for ML?
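A deliberately simple sketch of the idea (real pipelines typically use library tokenizers such as those in NLTK or spaCy; the regex is an illustrative assumption):

```python
import re

text = "Tokenization splits raw text into units a model can work with."
tokens = re.findall(r"[a-z']+", text.lower())   # crude word-level tokenizer
print(tokens)
```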





46. Which practice ensures ML models comply with data privacy regulations?





47. In e-commerce, which business case strongly supports deploying a recommendation system?





48. How does feature engineering affect the impact of data size on an ML model?





49. Which regularization technique penalizes the sum of the absolute values of model parameters?
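For reference, a minimal Lasso sketch (the synthetic data and alpha value are illustrative assumptions):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=20, n_informative=5, random_state=0)

# L1 (Lasso) adds alpha * sum(|w|) to the loss, which drives many weights
# exactly to zero and so acts as implicit feature selection.
model = Lasso(alpha=1.0).fit(X, y)
print((model.coef_ == 0).sum(), "coefficients shrunk to zero")
```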





50. Which tool is commonly used for deploying ML models in Kubernetes environments?





51. Which issue arises from training a model on a large but low-quality dataset?





52. What is the primary benefit of using a validation set during model training?





53. Which method helps evaluate the long-term stability of an ML model in production?





54. In an online streaming service, which business case strongly supports implementing a recommendation system?





55. What is a potential issue with using ordinal encoding for nominal categorical data?
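A small sketch of the contrast (using pandas; the color column is an illustrative assumption):

```python
import pandas as pd

colors = pd.DataFrame({"color": ["red", "green", "blue", "green"]})   # nominal: no natural order

# Ordinal codes (red=0, green=1, blue=2) would impose an artificial ordering
# a model may exploit; one-hot encoding gives each category its own column instead.
print(pd.get_dummies(colors, columns=["color"]))
```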





56. Why might increasing data size alone not solve model underfitting issues?





57. Which best describes the ethical concern of “automation bias” in AI systems?





58. Which is the most appropriate method for handling missing data before training an ML model?
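One common baseline, sketched with scikit-learn (the tiny array and mean strategy are illustrative assumptions; the right choice depends on why values are missing):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Mean imputation fills each gap with its column mean; dropping rows or
# model-based imputation are alternatives.
print(SimpleImputer(strategy="mean").fit_transform(X))
```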





59. Which metric should be used to evaluate a classifier optimized for detecting the positive class?





60. What is a potential ethical issue in using AI for personalized marketing?





61. Which is NOT a common method to prevent overfitting in ML models?





62. In which scenario is a logit transformation most useful?





63. Which method effectively reduces latency in real-time ML inference?





64. Why is it important to monitor both precision and recall in model evaluation?
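A quick sketch of the two metrics side by side (labels and predictions are illustrative assumptions):

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Precision: of the predicted positives, how many were correct?
# Recall:    of the actual positives, how many were found?
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
```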





65. What is a primary ethical issue related to deploying facial recognition technology?





66. How can ethical concerns arise during feature engineering?





67. How would you explain the importance of ethical considerations in AI to a stakeholder worried about bias?





68. Which challenge is likely when deploying AI in public interest research, and how can it be mitigated?





69. What is the key advantage of normalizing data for gradient-based optimization?
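A short sketch of standardization (the feature values are illustrative assumptions):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Features on very different scales make gradient-based optimizers zig-zag;
# standardizing puts them on comparable scales so training converges faster.
X = np.array([[1.0, 20_000.0],
              [2.0, 30_000.0],
              [3.0, 40_000.0]])
print(StandardScaler().fit_transform(X))
```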





70. How does feature engineering benefit from working with numerical data vs. textual data?





71. In AI, what does “overfitting” refer to?





72. A public sector organization automates document processing with AI. What is a key ethical concern, and how can it be addressed?





73. Why is cross-validation important in feature engineering?





74. When operationalizing an ML model, how can you ensure it aligns with business objectives while minimizing ethical risks?





75. In a recommendation system for e-commerce, which approach is most likely to succeed?





76. Which security measure is crucial for protecting the ML pipeline from supply chain attacks?





77. What is the primary function of model orchestration tools in ML deployment?





78. Why keep validation and test sets separate when evaluating a model?





79. How would you explain model generalization to a non-expert stakeholder?





80. Which sign indicates your ML model is overfitting the training data?