⚡️ About Merkle Science
We are known industry-wide for our predictive crypto risk & intelligence platform's best-in-class user experience and unmatched customizability. Our solutions provide next-generation crypto threat detection, risk management, and compliance for businesses, banks, and government agencies.
Our growing solution suite includes transaction screening and wallet monitoring, crypto crime investigation tools, enhanced due diligence and entity reporting, and crypto compliance and investigation training.
Backed by leading venture capital firms Darrow Holdings, Kraken Ventures, Uncorrelated Ventures, Digital Currency Group, Fenbushi Capital, Kenetic, Lunex Ventures and the Singapore Government-supported deep technology fund, SGInnovate, we enable businesses to scale and mature so that a full range of individuals, entities, and services may transact with crypto safely.
🌟 Team and Role
Merkle Science envisions a world powered by crypto. We are creating the infrastructure necessary to ensure the safe and healthy growth of this market, now at a $2 trillion market cap. We are trailblazers and disruptors, pushing the boundaries of innovation so that a full range of individuals and corporations can transact with crypto safely. We are a global company with offices in Singapore, London, Bangalore, and New York.
💥 What will you do?
Create and maintain optimal data pipeline architecture for our workloads. This includes building highly resilient architecture for both streaming and batch ETL processes.
Have a good understanding of the data structures public blockchains use to store data. A significant portion of our data pipelines parse blockchain data and store it in our data warehouses.
Assemble large, complex data sets that meet functional and non-functional business requirements. This includes expanding the scope of our data-mining efforts by building data pipelines that crawl data from the dark web, the open web, and third-party data sources.
Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
Work closely with a team of frontend and backend engineers, product managers, and analysts to render data in our products.
Implement algorithms to transform raw data into useful information.
Build, manage, and deploy AI/ML workflows.
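To give a flavor of the batch ETL work described above, here is a minimal, illustrative sketch of an extract–transform–load step over raw blockchain-style transaction records. The field names (`txid`, `value_satoshi`, `to`), the flagged-address set, and the in-memory "warehouse" are hypothetical stand-ins, not Merkle Science's actual schema or stack:

```python
# Illustrative sketch only: a minimal batch ETL step over raw
# blockchain-style transaction records. All field names and the
# flagged-address list are hypothetical examples.

def extract(raw_blocks):
    """Flatten raw blocks into individual transaction dicts."""
    for block in raw_blocks:
        for tx in block.get("transactions", []):
            yield tx

def transform(txs, flagged_addresses):
    """Normalize values and tag transactions touching flagged addresses."""
    for tx in txs:
        yield {
            "txid": tx["txid"],
            "value_btc": tx["value_satoshi"] / 1e8,  # satoshi -> BTC
            "flagged": tx["to"] in flagged_addresses,
        }

def load(rows, warehouse):
    """Append transformed rows to an in-memory 'warehouse' table."""
    warehouse.extend(rows)
    return warehouse

# Usage: run one block of two transactions through the pipeline.
raw = [{"transactions": [
    {"txid": "a1", "value_satoshi": 150_000_000, "to": "addr_x"},
    {"txid": "b2", "value_satoshi": 50_000, "to": "addr_y"},
]}]
table = load(transform(extract(raw), {"addr_x"}), [])
```

In production this shape of pipeline would typically be orchestrated by a tool like Apache Airflow or expressed as an Apache Beam/Spark job rather than plain generators, but the extract/transform/load decomposition is the same.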
🙋 What are we looking for?
5+ years of relevant experience as a Big Data Engineer.
Experience building data pipelines (ETL/ELT) with open-source tools such as Apache Airflow, Apache Beam, and Apache Spark.
Experience building real-time streaming pipelines with Kafka or Pub/Sub.
Experience building and maintaining OLAP data warehouses and OLTP databases.
A good grasp of Python and Bash scripting, plus basic cloud platform skills (GCP or AWS).
A working knowledge of Docker is a plus.
An analytical mind and business acumen.
Excellent communication skills
👀 What process do we follow? (<2 weeks)
Application: We will keep it simple. You can apply directly through our job portal. All we ask for is a resume. Additional portfolio links such as GitHub, Medium, or a personal website are welcome.
Screening: We will screen your profile and get back with a decision within a week.
Interviews: We will have two rounds of interviews. Round one (30 mins) will focus on getting to know each other better and identifying whether this could work for both of us. Round two (60 mins) is a technical round where we will review your prior experience and discuss how you would build data systems to solve a problem we will introduce on the call.
Meet the Team: Culture fit is essential for both you and us, so we always go the extra mile: you will meet two other colleagues on the team whom you would be working with. Here, you can ask about the stack, the culture, and anything else you would want to know when considering a new role.
Offer Rollout: If all looks well, we will open a bottle of champagne.
📢 Other information you may want to consider
We will stay flexible for the rest of the pandemic and work remotely; however, we are not a remote-first company, and the work location will be Bangalore once things settle.
❤️ Well Being, Compensation and Benefits
We care about your well-being. Along with excellent health insurance, we offer flexible time off, numerous learning and development initiatives through which Merkle Science invests in employee growth, and working hours we have never heard a single complaint about. We respect your weekends too :) We also regularly host team-building sessions and encourage open discussion of mental well-being.
On the compensation front, we admire talent and believe in rewarding people for their contributions. Compensation is best in class, along with generous equity, and the whole process will be transparent from the first minute we speak with you.