DNS is one of the oldest distributed systems and has stood the test of time and abuse. It has been at the core of the internet since the beginning and even underpins newer services like CDNs. Because this post is about DNS in AWS, I will assume that you know how resolution works in general.
In AWS, Route 53 is the global DNS service that is highly available and scalable without users having to sweat about it. Implementation-wise, every VPC has a "+2" address (the VPC's base CIDR plus two) at which a DNS resolver service listens; it can also be reached at the link-local address 169.254.169.253. This is not a host running BIND, but a service that forwards queries to a resolver service running at the AZ level. That resolver in turn resolves the query depending on where it came from and where it has to go, and keeps a cache to speed up queries. There is a limit of 1024 packets per second per ENI to the +2 address, which will bite you if you run a DNS forwarder inside EC2.
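The +2 address is simply the VPC's base CIDR plus two, which is easy to sketch with Python's ipaddress module (the CIDR below is just an example):

```python
import ipaddress

def vpc_resolver_address(vpc_cidr: str) -> str:
    """Return the Route 53 Resolver ("+2") address for a VPC CIDR."""
    network = ipaddress.ip_network(vpc_cidr)
    return str(network.network_address + 2)

# A VPC with CIDR 10.0.0.0/16 gets its resolver at 10.0.0.2.
print(vpc_resolver_address("10.0.0.0/16"))  # 10.0.0.2
```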
A private hosted zone is a container that holds information about how you want Amazon Route 53 to respond to DNS queries for a domain and its subdomains within one or more VPCs that you create with the Amazon VPC service. When you would use it: suppose you have a database server that runs on an EC2 instance in the VPC that you associated with your private hosted zone. You create an A or AAAA record, such as db.example.com, and you specify the IP address of the database server. When an application submits a DNS query for db.example.com, Route 53 returns the corresponding IP address. To get an answer from a private hosted zone you also have to be querying from an EC2 instance in one of the associated VPCs (or through an inbound endpoint from a hybrid setup). If you try to query a private hosted zone from outside the associated VPCs or your hybrid setup, the query will be recursively resolved on the public internet.
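The db.example.com example maps to a small change batch; here is a sketch of building one as a plain dict (the shape mirrors what boto3's route53 change_resource_record_sets call expects, and the IP address is made up):

```python
def a_record_change(name: str, ip: str, ttl: int = 300) -> dict:
    """Build a Route 53 ChangeBatch that UPSERTs a single A record.

    The dict shape mirrors the ChangeBatch parameter of boto3's
    route53.change_resource_record_sets call.
    """
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": ip}],
                },
            }
        ]
    }

# The db.example.com record from above; the address is illustrative.
batch = a_record_change("db.example.com", "10.0.1.25")
```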
Resolver destination types:
Private DNS (consumes the private hosted zone associations); has priority over the others.
VPC DNS (authoritative for EC2 private and public names and RFC 1918 addresses); this is what returns the private IP when you resolve an EC2 instance's public DNS name from inside the VPC.
Public DNS (authoritative for S3 and other public services).
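A rough sketch of that decision order, with made-up domain lists standing in for the real PHZ associations and EC2 name suffixes:

```python
# Illustrative placeholders, not an AWS API.
PRIVATE_ZONES = ["example.internal"]                      # PHZ associations
VPC_DOMAINS = ["ec2.internal", "compute.amazonaws.com"]   # EC2 names

def resolver_destination(qname: str) -> str:
    """Pick a destination type in the priority order described above."""
    def in_any(domains):
        return any(qname == d or qname.endswith("." + d) for d in domains)
    if in_any(PRIVATE_ZONES):
        return "PRIVATE_DNS"   # PHZ has priority over everything else
    if in_any(VPC_DOMAINS):
        return "VPC_DNS"       # EC2 private/public names
    return "PUBLIC_DNS"        # everything else goes to public DNS

print(resolver_destination("db.example.internal"))  # PRIVATE_DNS
```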

There is support for overlapping private hosted zones, so that two zones can be managed by different accounts.
Life before Resolver Endpoints: set up a DNS forwarder on an EC2 instance and change the VPC's DHCP option set to point the name servers at that forwarder. But all queries go to one forwarder even if there are multiple, because that is how Linux resolves: it picks the first name server from the resolv.conf list and hits it. If that fails, the next name server is tried only after a timeout, as configured in resolv.conf. The per-ENI packet limit also becomes an issue when all VPCs are hitting one ENI.
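The failover behaviour can be sketched like this (the addresses are placeholders, and 5 seconds is glibc's default per-attempt timeout):

```python
# Sketch of how glibc walks resolv.conf nameservers: the first server
# is always tried first, and the next one is only consulted after a
# timeout. Addresses are made up.
RESOLV_CONF_NAMESERVERS = ["10.0.0.53", "10.0.1.53"]
TIMEOUT_SECONDS = 5  # glibc's default per-attempt timeout

def pick_nameserver(failed: set) -> tuple:
    """Return (server to use, seconds already wasted on timeouts)."""
    wasted = 0
    for server in RESOLV_CONF_NAMESERVERS:
        if server not in failed:
            return server, wasted
        wasted += TIMEOUT_SECONDS  # each dead server costs a full timeout
    raise RuntimeError("all nameservers failed")

# Healthy: every query hammers the first forwarder only.
print(pick_nameserver(set()))           # ('10.0.0.53', 0)
# First forwarder down: 5s wasted before the second is tried.
print(pick_nameserver({"10.0.0.53"}))   # ('10.0.1.53', 5)
```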

If we set up a hub-and-spoke model, then every VPC's DHCP option set needs to be changed to point to that forwarder.

In came Resolver Endpoints. Inbound endpoints create ENIs that are reachable from your data center over Direct Connect or VPN. EC2 instances will still use the +2 resolver instead of the endpoints. Cost is for the ENIs (minimal). For outbound endpoints, you will also have to configure rules. There are two types of rules (FORWARD and SYSTEM). Only one VPC needs to host the endpoint; others can share the rules without peering or TGW, and if the VPCs are in different accounts, use RAM to share the rules across accounts.
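A sketch of how outbound rules get matched, with made-up rules and targets: the most specific (longest) matching domain wins, FORWARD sends the query to your on-prem resolvers, and SYSTEM keeps it with the VPC resolver:

```python
# Illustrative rule set: (domain, rule type, forward targets).
RULES = [
    ("corp.example.com", "FORWARD", ["192.168.10.53"]),  # to the DC
    (".", "SYSTEM", None),  # auto-defined: everything else stays local
]

def match_rule(qname: str):
    """Return the most specific rule that matches the query name."""
    best = None
    for domain, rtype, targets in RULES:
        if domain == "." or qname == domain or qname.endswith("." + domain):
            if best is None or len(domain) > len(best[0]):
                best = (domain, rtype, targets)
    return best

print(match_rule("db.corp.example.com"))  # FORWARD to the DC resolver
print(match_rule("amazon.com"))           # SYSTEM: stays in the VPC
```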

Resolver evolution to support forwarding:

Resolver creates some auto-defined SYSTEM rules that prevent breakage. For example, there is one for "." which means all queries go to SYSTEM; there are some for VPC DNS (like RFC 1918 reverse lookups and EC2 names), ones for private hosted zones, and some PTR rules for VPC IP lookups. So if you accidentally create a rule for ".", nothing will break, because the more specific auto-defined rules still win. PrivateLink is implemented on top of private hosted zones.
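A sketch of why the accidental "." rule is harmless, using a made-up subset of the auto-defined rules: with longest-suffix matching, the more specific auto-defined rules keep winning for VPC and PHZ names:

```python
# Illustrative rules only, not the real auto-defined set.
AUTO_DEFINED = [
    ("ec2.internal", "SYSTEM"),      # EC2 instance names
    ("10.in-addr.arpa", "SYSTEM"),   # RFC 1918 PTR lookups
    ("example.internal", "SYSTEM"),  # a private hosted zone
]
CUSTOMER = [(".", "FORWARD")]        # the accidental catch-all rule

def winning_rule(qname: str):
    """Most specific (longest) matching domain wins; '.' is the fallback."""
    candidates = [
        (d, t) for d, t in AUTO_DEFINED + CUSTOMER
        if d == "." or qname == d or qname.endswith("." + d)
    ]
    return max(candidates, key=lambda r: len(r[0]))

print(winning_rule("ip-10-0-1-5.ec2.internal"))  # SYSTEM, not forwarded
print(winning_rule("amazon.com"))                # falls to the "." FORWARD
```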
AD (Active Directory) will also need outbound resolver rules.
