Introduction: Healthcare payers have long operated using only retrospective claims data to make business and care decisions.
Introduction: This project provides a scalable, Spark-based mechanism to efficiently read DICOM images into a Spark SQL DataFrame.
Introduction: As an analytics company, Abzooba uses Spark extensively for machine learning, data ingestion, ETL, and large-scale data processing. Spark enables in-memory processing of large-scale data. Spark jobs can be long-running, short-lived, or scheduled as needed, and their memory requirements differ accordingly.
Introduction: Through this article, I would like to familiarize readers with some of the basic concepts of the data lake and take them through the various flavors of data lake implementations across the industry. I will also deep-dive into data virtualization concepts and show how a judicious mix of virtualization with data lake components gives us the required agility.
Introduction: We need to create a minimum of three Apache ZooKeeper nodes and three Apache NiFi nodes. You can add more NiFi nodes if required. To achieve failover, if the NiFi primary node goes down, ZooKeeper's leader-election mechanism designates another node in the NiFi cluster as the new primary.
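The three-ZooKeeper, three-NiFi topology above can be sketched with the following configuration fragments. This is a minimal, hedged example: the hostnames (zk1–zk3, nifi1) and ports are placeholders for illustration, not values from the original article.

```properties
# zoo.cfg — shared by all three ZooKeeper nodes (hypothetical hosts zk1, zk2, zk3)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888

# nifi.properties — relevant clustering entries on each NiFi node (shown for nifi1)
nifi.cluster.is.node=true
nifi.cluster.node.address=nifi1
nifi.cluster.node.protocol.port=11443
nifi.zookeeper.connect.string=zk1:2181,zk2:2181,zk3:2181
```

With `nifi.cluster.is.node=true` and the ZooKeeper connect string pointing at the three-node ensemble, the cluster coordinator and primary node are elected via ZooKeeper, so losing the current primary triggers re-election among the remaining NiFi nodes.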