Complete collection of technical, behavioral, and coding questions asked in Google interviews. Master these to crack your Google interview.
A process is an independent program in execution with its own memory space. A thread is a lightweight unit of execution within a process that shares the same memory. Processes are isolated and more resource-intensive, while threads allow parallel execution within a single process. Context switching between threads is faster than between processes.
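A minimal Node.js sketch of the difference, assuming the built-in worker_threads module (Node 12+): a Worker is a thread inside the same process, so it can read and write a SharedArrayBuffer directly, whereas a separate process would need IPC or sockets to exchange data.
const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const shared = new SharedArrayBuffer(4);            // memory visible to both threads
  const counter = new Int32Array(shared);
  const worker = new Worker(__filename, { workerData: shared });
  worker.on('exit', () => console.log('counter =', counter[0])); // prints 1
} else {
  Atomics.add(new Int32Array(workerData), 0, 1);      // the worker thread writes shared memory
}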
HashMap uses an array of buckets with linked lists (or trees in Java 8+). Keys are hashed using hashCode(), and the bucket index is derived from the hash modulo the array length (Java actually computes (n - 1) & hash, since the capacity is a power of two). Collisions are handled via chaining. When the load factor exceeds 0.75, the array is resized. Java 8 converts a bucket's linked list to a red-black tree once it grows past 8 entries (and the table is large enough), giving O(log n) lookup within that bucket.
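An illustrative sketch of hashing with chaining (deliberately simplified: no resizing or treeification, and the hash function is made up):
class SimpleHashMap {
  constructor(capacity = 16) {
    this.buckets = Array.from({ length: capacity }, () => []); // each bucket is a chain
  }
  _index(key) {
    let hash = 0;
    for (const ch of String(key)) hash = (hash * 31 + ch.charCodeAt(0)) | 0;
    return Math.abs(hash) % this.buckets.length; // hash -> bucket index
  }
  set(key, value) {
    const bucket = this.buckets[this._index(key)];
    const entry = bucket.find(e => e[0] === key);
    if (entry) entry[1] = value;        // existing key: overwrite
    else bucket.push([key, value]);     // collision: append to the chain
  }
  get(key) {
    const entry = this.buckets[this._index(key)].find(e => e[0] === key);
    return entry ? entry[1] : undefined;
  }
}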
CAP theorem states that a distributed system can only guarantee two of three properties: Consistency (all nodes see the same data), Availability (every request receives a response), and Partition Tolerance (system continues despite network failures). In practice, partition tolerance is required, so systems choose between CP (like HBase) or AP (like Cassandra).
SQL databases are relational, use structured schemas, support ACID transactions, and use SQL for queries (MySQL, PostgreSQL). NoSQL databases are non-relational, schema-flexible, optimized for specific use cases: document stores (MongoDB), key-value (Redis), column-family (Cassandra), graph (Neo4j). NoSQL scales horizontally better but may sacrifice consistency.
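A rough illustration of the modeling difference, using hypothetical users/orders data (the relational version shown as comments, the document version as a plain object in MongoDB style):
// Relational: normalized rows in separate tables, joined at query time
// users:  { id: 1, name: 'Ada', country: 'DE' }
// orders: { id: 10, user_id: 1, total: 99.5 }
// SELECT * FROM orders o JOIN users u ON u.id = o.user_id WHERE o.id = 10;

// Document store: related data embedded in a single document, no join needed
const orderDocument = {
  _id: 10,
  total: 99.5,
  user: { name: 'Ada', country: 'DE' }, // denormalized copy of the user
};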
A distributed system is a collection of autonomous computers that appear as a single coherent system. Challenges include: network failures, latency, partial failures, consistency, ordering of events, consensus, and debugging complexity. Key concepts: replication, partitioning, consensus protocols (Paxos, Raft), vector clocks.
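One of those concepts in miniature, a vector clock compare-and-merge (an illustrative sketch only; clocks here are plain objects mapping node id to counter):
// Merge two vector clocks: take the per-node maximum of the counters
function mergeClocks(a, b) {
  const merged = { ...a };
  for (const [node, count] of Object.entries(b)) {
    merged[node] = Math.max(merged[node] ?? 0, count);
  }
  return merged;
}

// a "happened before" b if no counter in a exceeds b's, and at least one is strictly smaller
function happenedBefore(a, b) {
  const nodes = new Set([...Object.keys(a), ...Object.keys(b)]);
  let strictlySmaller = false;
  for (const n of nodes) {
    const x = a[n] ?? 0, y = b[n] ?? 0;
    if (x > y) return false;
    if (x < y) strictlySmaller = true;
  }
  return strictlySmaller;
}

// mergeClocks({ a: 2, b: 1 }, { a: 1, b: 3 }) -> { a: 2, b: 3 }
// happenedBefore({ a: 1 }, { a: 2, b: 1 })    -> true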
Load balancing distributes incoming requests across multiple servers to prevent overload. Types: Round-robin (sequential distribution), Least connections (send to least busy), IP hash (consistent routing), Weighted (based on server capacity). Google uses advanced load balancing with Maglev and global anycast for its services.
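A minimal sketch of the simplest strategy, round-robin (illustrative only; production balancers such as Maglev also handle health checks, connection tracking, and consistent hashing):
class RoundRobinBalancer {
  constructor(servers) {
    this.servers = servers;
    this.index = 0;
  }
  next() {
    const server = this.servers[this.index];
    this.index = (this.index + 1) % this.servers.length; // rotate sequentially
    return server;
  }
}

const lb = new RoundRobinBalancer(['10.0.0.1', '10.0.0.2', '10.0.0.3']);
// lb.next() -> '10.0.0.1', then '10.0.0.2', then '10.0.0.3', then back to '10.0.0.1'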
Microservices architecture breaks applications into small, independent services. Advantages: independent deployment, technology diversity, scalability, fault isolation. Disadvantages: network latency, distributed system complexity, data consistency challenges, operational overhead. Google uses microservices extensively with gRPC for inter-service communication.
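A toy sketch of two independent services calling each other over the network, using plain HTTP/JSON as a stand-in for gRPC (ports, endpoints, and payloads are made up; assumes Node 18+ for the global fetch):
const http = require('http');

// "User" service: a small, independently deployable unit
http.createServer((req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ id: 42, name: 'Ada' }));
}).listen(4001);

// "Order" service: fetches user data over the network instead of a local function call,
// which is exactly where the latency and partial-failure tradeoffs come from
http.createServer(async (req, res) => {
  const user = await fetch('http://localhost:4001/users/42').then(r => r.json());
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ orderId: 1, user }));
}).listen(4002);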
Caching stores frequently accessed data for faster retrieval. Strategies: Write-through (write to cache and DB simultaneously), Write-back (write to cache first, async to DB), Write-around (write to DB only). Invalidation: TTL-based, event-based, LRU eviction. "There are only two hard things in CS: cache invalidation and naming things."
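A toy sketch of the write strategies, with two Maps standing in for a real cache and database:
const cache = new Map();
const db = new Map(); // stand-in for the backing database

function writeThrough(key, value) {
  cache.set(key, value); // write cache and DB together: reads stay fast and consistent
  db.set(key, value);
}

function writeAround(key, value) {
  db.set(key, value);    // skip the cache; it is populated lazily on the next read miss
}

function read(key) {
  if (cache.has(key)) return cache.get(key);       // cache hit
  const value = db.get(key);
  if (value !== undefined) cache.set(key, value);  // populate on miss
  return value;
}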
Eventual consistency guarantees that if no new updates are made, all replicas will eventually converge to the same value. Used when availability is prioritized over immediate consistency. Examples: social media feeds, shopping cart counts, DNS. Contrasts with strong consistency where reads always return the latest write.
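A tiny illustration of convergence using last-write-wins timestamps, one of several reconciliation strategies (real systems may instead use vector clocks or CRDTs):
// Each replica holds { value, timestamp }; merging keeps the newer write,
// so once updates stop and replicas sync, all copies converge
function merge(a, b) {
  return a.timestamp >= b.timestamp ? a : b; // last-write-wins
}

let replicaA = { value: 'cart has 2 items', timestamp: 100 };
let replicaB = { value: 'cart has 3 items', timestamp: 105 };

replicaA = merge(replicaA, replicaB);
replicaB = merge(replicaB, replicaA);
// Both replicas now hold 'cart has 3 items'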
MapReduce is a programming model for processing large datasets across distributed clusters. Map phase: transforms input into key-value pairs. Reduce phase: aggregates values by key. Used for: log analysis, indexing, data transformation, machine learning. Google invented it but now uses more advanced systems like Flume and Dataflow.
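The classic word-count example, collapsed onto a single machine to show the shape of the model (a real cluster distributes the map and reduce work and performs the shuffle for you):
// Map phase: each input line becomes (word, 1) pairs
function mapLine(line) {
  return line.toLowerCase().split(/\W+/).filter(Boolean).map(word => [word, 1]);
}

// Shuffle: group values by key (the framework does this across the cluster)
function shuffle(pairs) {
  const groups = new Map();
  for (const [key, value] of pairs) {
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(value);
  }
  return groups;
}

// Reduce phase: aggregate the values for each key
function reduceWord(key, values) {
  return [key, values.reduce((a, b) => a + b, 0)];
}

const input = ['the quick brown fox', 'the lazy dog'];
const counts = [...shuffle(input.flatMap(mapLine))].map(([k, v]) => reduceWord(k, v));
// [['the', 2], ['quick', 1], ['brown', 1], ['fox', 1], ['lazy', 1], ['dog', 1]]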
Pro Tip: Googleyness is as important as technical skills. Prepare 8-10 STAR stories covering collaboration, feedback, ambiguity, and user focus.
Use STAR method. Describe a project where requirements were unclear. Show how you: gathered available data, made reasonable assumptions, validated with stakeholders, iterated based on feedback. Emphasize comfort with ambiguity and proactive information gathering. Google values people who can make progress despite uncertainty.
Share a genuine example of constructive criticism. Explain your initial reaction, how you reflected on it, and specific changes you made. Show intellectual humility and growth mindset. Google values people who seek feedback and genuinely improve from it, not those who become defensive.
Describe the disagreement respectfully. Focus on: understanding their perspective first, presenting data-driven arguments, finding common ground, accepting the final decision gracefully. Show you can challenge ideas while maintaining relationships. Google wants healthy debate, not conflict avoidance or stubbornness.
Share a specific instance of mentoring, pair programming, or supporting a colleague. Describe the situation, your actions, and the positive outcome for them. Google values collaborative people who elevate their teams, not just individual contributors.
Choose an example showing learning agility. Explain your approach: resources used, questions asked, practice methods, timeline. Demonstrate curiosity and systematic learning. Google values continuous learners who can adapt to new technologies and domains quickly.
Share an example where you prioritized user experience. Could be debugging a customer issue, improving accessibility, or anticipating user needs. Show empathy and user-first thinking. Google's mission is user-focused: "Focus on the user and all else will follow."
Explain your framework: impact vs effort analysis, alignment with goals, stakeholder input, dependencies. Give a specific example. Show you can make tough tradeoffs and communicate them. Google moves fast and expects engineers to make good prioritization decisions independently.
Share an example where you advocated for the right technical or product decision. Explain how you: understood their need, identified concerns, proposed alternatives, reached agreement. Show you can be diplomatic but firm when something matters.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.cache = new Map(); // Map preserves insertion order, so the first key is the least recently used
  }

  get(key) {
    if (!this.cache.has(key)) return -1;
    // Move to end (most recently used)
    const value = this.cache.get(key);
    this.cache.delete(key);
    this.cache.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.cache.has(key)) {
      this.cache.delete(key);
    }
    this.cache.set(key, value);
    // Evict oldest if over capacity
    if (this.cache.size > this.capacity) {
      const oldestKey = this.cache.keys().next().value;
      this.cache.delete(oldestKey);
    }
  }
}
// Time: O(1) for both get and put
// Space: O(capacity)

function lengthOfLongestSubstring(s) {
  const seen = new Map(); // char -> last index where it was seen
  let maxLength = 0;
  let start = 0;
  for (let end = 0; end < s.length; end++) {
    const char = s[end];
    // If char already appears inside the current window, shrink the window past it
    if (seen.has(char) && seen.get(char) >= start) {
      start = seen.get(char) + 1;
    }
    seen.set(char, end);
    maxLength = Math.max(maxLength, end - start + 1);
  }
  return maxLength;
}
// Example: "abcabcbb" → 3 ("abc")
// Time: O(n), Space: O(min(m, n)) where m is the charset size

function mergeKLists(lists) {
  if (!lists.length) return null;
  // Min-heap approach; assumes MinPriorityQueue and ListNode helpers
  // (e.g., as provided in LeetCode's JavaScript environment)
  const minHeap = new MinPriorityQueue({ priority: x => x.val });
  // Add the first node from each list
  for (const list of lists) {
    if (list) minHeap.enqueue(list);
  }
  const dummy = new ListNode(0);
  let current = dummy;
  while (!minHeap.isEmpty()) {
    const node = minHeap.dequeue().element;
    current.next = node;
    current = current.next;
    if (node.next) {
      minHeap.enqueue(node.next);
    }
  }
  return dummy.next;
}
// Time: O(N log K) where N is total nodes, K is the number of lists
// Space: O(K) for the heap

function ladderLength(beginWord, endWord, wordList) {
  const wordSet = new Set(wordList);
  if (!wordSet.has(endWord)) return 0;
  const queue = [[beginWord, 1]];
  const visited = new Set([beginWord]);
  while (queue.length) {
    const [word, level] = queue.shift();
    if (word === endWord) return level;
    // Try all single-character transformations ('a'..'z' are char codes 97..122)
    for (let i = 0; i < word.length; i++) {
      for (let c = 97; c <= 122; c++) {
        const newWord = word.slice(0, i) +
          String.fromCharCode(c) +
          word.slice(i + 1);
        if (wordSet.has(newWord) && !visited.has(newWord)) {
          visited.add(newWord);
          queue.push([newWord, level + 1]);
        }
      }
    }
  }
  return 0;
}
// BFS guarantees the shortest transformation sequence
// Time: O(M² × N), Space: O(M × N) where M is word length, N is the number of words

class RateLimiter {
  constructor(windowSizeMs, maxRequests) {
    this.windowSize = windowSizeMs;
    this.maxRequests = maxRequests;
    this.requests = new Map(); // userId -> [timestamps]
  }

  isAllowed(userId) {
    const now = Date.now();
    const windowStart = now - this.windowSize;
    if (!this.requests.has(userId)) {
      this.requests.set(userId, []);
    }
    const timestamps = this.requests.get(userId);
    // Remove expired timestamps
    while (timestamps.length && timestamps[0] <= windowStart) {
      timestamps.shift();
    }
    if (timestamps.length >= this.maxRequests) {
      return false;
    }
    timestamps.push(now);
    return true;
  }
}

// Usage: allow 100 requests per minute
const limiter = new RateLimiter(60000, 100);

function serialize(root) {
  const result = [];
  function dfs(node) {
    if (!node) {
      result.push('null');
      return;
    }
    result.push(node.val.toString());
    dfs(node.left);
    dfs(node.right);
  }
  dfs(root);
  return result.join(',');
}

function deserialize(data) {
  const values = data.split(',');
  let index = 0;
  function dfs() {
    if (values[index] === 'null') {
      index++;
      return null;
    }
    const node = new TreeNode(parseInt(values[index], 10)); // assumes a standard TreeNode(val) constructor
    index++;
    node.left = dfs();
    node.right = dfs();
    return node;
  }
  return dfs();
}
// Preorder traversal with explicit null markers ensures a unique serialization
// Time: O(n), Space: O(n)