2024-04-15 · 18 min read
Software Engineering

The Python Features You're Not Using (But Should Be)

Advanced Features That Will Make Your Code Reviews More Annoying

💡
TL;DR - Python Features You're Probably Not Using (But Should)
Walrus Operator (:=): Assign and use values in one expression—perfect for while loops and comprehensions
Pattern Matching: Structural pattern matching that makes complex conditionals readable and powerful
functools Magic: lru_cache for memoization, singledispatch for function overloading
Context Managers: Create your own with contextlib for cleaner resource management
Type System: Beyond basic hints—Protocol, TypedDict, Literal for better type safety
Descriptors: The secret sauce behind @property, enabling computed attributes
Data Classes: Post-init processing, field factories, and frozen instances
pathlib: Object-oriented file paths that make os.path feel prehistoric
Python has been around for over 30 years, and its standard library is massive. Yet most of us use maybe 20% of what's available. We stick to the same patterns, the same modules, the same approaches we learned when we first picked up the language.
But Python's standard library is full of gems that can make your code cleaner, faster, and more expressive. These aren't experimental features or third-party packages—they're built-in tools that have been battle-tested for years.
Here are 13 Python features that are criminally underused. Each one solves real problems, and once you know about them, you'll start seeing opportunities to use them everywhere.

The Walrus Operator: Assignment Expressions That Save the Day

The walrus operator (:=) was introduced in Python 3.8. It allows you to assign values inside expressions. The name comes from the visual similarity of := to a walrus lying on its side.
The primary benefit is eliminating repeated function calls and making certain patterns more concise, particularly in while loops and list comprehensions.

Before: The Repetitive Pattern

python
# Reading file chunks - the old way
chunk = file.read(8192)
while chunk:
    process(chunk)
    chunk = file.read(8192)  # Repeated call

# Filtering on an expensive computation
results = []
for item in data:
    value = expensive_function(item)
    if value > threshold:
        results.append(value)

After: Elegant and DRY

python
# Reading file chunks - with walrus
while chunk := file.read(8192):
    process(chunk)

# List comprehension that computes expensive_function once per item
results = [value for item in data
           if (value := expensive_function(item)) > threshold]

# Regex matching in conditionals
if match := pattern.search(text):
    print(f"Found: {match.group()}")

(Flowchart: with the walrus operator, reading a chunk and testing it happen in a single loop-condition step before processing; without it, a separate read precedes every truthiness check.)

Real-World Example: API Pagination

The walrus operator is particularly useful for pagination patterns:
python
# Fetching paginated API results
def fetch_all_users(api_client):
    users = []
    page = 1

    while data := api_client.get_users(page=page):
        users.extend(data['users'])
        if not data['has_next']:
            break
        page += 1

    return users
This eliminates duplicate api_client.get_users() calls and removes the need to check if data exists before using it.
⚠️
The walrus operator is best suited for while loops and filtered comprehensions. Overuse can reduce code readability, particularly when chaining multiple assignment expressions.

Pattern Matching: Switch Statements on Steroids

Python 3.10 introduced structural pattern matching with match/case. This feature goes beyond simple switch statements, providing powerful pattern matching capabilities for complex data structures.
Pattern matching excels at destructuring nested data, eliminating verbose if-elif chains that check types, keys, and values.

Basic Matching

python
def http_status_message(status_code):
    match status_code:
        case 200:
            return "OK"
        case 404:
            return "Not Found"
        case 500 | 502 | 503:  # Multiple patterns
            return "Server Error"
        case _:  # Default case
            return "Unknown Status"

Structural Pattern Matching

Where pattern matching truly shines is destructuring complex data:
python
def process_command(command):
    match command:
        # Match dictionary structure
        case {"action": "create", "type": "user", "data": {"name": str(name)}}:
            return create_user(name)

        # Match with guards
        case {"action": "delete", "id": int(id)} if id > 0:
            return delete_item(id)

        # Match sequences
        case ["move", x, y] if isinstance(x, int) and isinstance(y, int):
            return move_to(x, y)

        # Match objects
        case Point(x=0, y=0):
            return "Origin"

        # Capture patterns
        case ["copy", *files, "to", destination]:
            return copy_files(files, destination)

Pattern Matching Flow

(Flowchart: the input command is tried against the dict, sequence, and object patterns in order; the first pattern that matches destructures the data and handles it, and anything unmatched falls through to the default handler.)

Real-World Example: JSON API Response Handler

python
def handle_api_response(response):
    match response:
        case {"status": "success", "data": data}:
            return process_data(data)

        case {"status": "error", "code": "AUTH_FAILED"}:
            refresh_token()
            return retry_request()

        case {"status": "error", "code": code, "message": msg}:
            logger.error(f"API Error {code}: {msg}")
            return None

        case {"status": "partial", "data": data, "errors": errors}:
            log_errors(errors)
            return process_partial_data(data)

        case _:
            raise ValueError(f"Unexpected response format: {response}")

functools: The Swiss Army Knife Module

The functools module provides higher-order functions and operations on callable objects. Beyond the commonly used wraps decorator, it contains several powerful utilities that can significantly improve code performance and design.

lru_cache: Automatic Memoization

The lru_cache decorator implements a Least Recently Used cache for function results. When decorated functions are called with the same arguments, cached results are returned immediately instead of recomputing.
This is particularly effective for recursive functions with overlapping subproblems, such as the classic Fibonacci sequence calculation.
python
from functools import lru_cache
import time

# Without caching - slow recursive implementation
def fibonacci_slow(n):
    if n < 2:
        return n
    return fibonacci_slow(n-1) + fibonacci_slow(n-2)

# With caching - lightning fast
@lru_cache(maxsize=128)
def fibonacci_fast(n):
    if n < 2:
        return n
    return fibonacci_fast(n-1) + fibonacci_fast(n-2)

# Performance comparison
start = time.time()
result1 = fibonacci_slow(35)  # Takes a few seconds
print(f"Slow: {time.time() - start:.2f}s")

start = time.time()
result2 = fibonacci_fast(35)  # Effectively instant
print(f"Fast: {time.time() - start:.5f}s")

# Cache statistics
print(fibonacci_fast.cache_info())
# CacheInfo(hits=33, misses=36, maxsize=128, currsize=36)
lru_cache shines for expensive, deterministic computations with repeated inputs—parsing, pricing rules, pure lookups. Be cautious caching results that can go stale, such as live API calls or database queries. Python 3.9+ also offers @cache for an unbounded cache.
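For reference, switching to the unbounded variant is a one-line swap—a minimal sketch using a hypothetical factorial function:

```python
from functools import cache  # Python 3.9+

# @cache is lru_cache(maxsize=None): no eviction, slightly less bookkeeping
@cache
def factorial(n):
    return n * factorial(n - 1) if n else 1

print(factorial(10))  # 3628800 -- every subproblem up to n=10 is now cached
```

Because nothing is ever evicted, reserve @cache for functions whose argument space is bounded.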

singledispatch: Function Overloading in Python

The singledispatch decorator enables function overloading based on the type of the first argument. This eliminates lengthy if-elif chains that check object types and dispatch to different logic.
Unlike traditional function overloading in statically typed languages, singledispatch provides a Pythonic approach to polymorphism through registration of type-specific implementations.
python
from functools import singledispatch
from datetime import datetime
import json

@singledispatch
def serialize(obj):
    """Default serialization - just convert to string"""
    return str(obj)

@serialize.register(dict)
def _(obj):
    """Serialize dictionaries to JSON"""
    return json.dumps(obj)

@serialize.register(list)
@serialize.register(tuple)
def _(obj):
    """Serialize sequences as JSON arrays"""
    return json.dumps(list(obj))

@serialize.register(datetime)
def _(obj):
    """Serialize datetime as ISO format"""
    return obj.isoformat()

# Custom class registration (User is your own class)
@serialize.register(User)
def _(obj):
    """Serialize User objects"""
    return {
        'id': obj.id,
        'username': obj.username,
        'created': serialize(obj.created_at)  # Recursive!
    }

# Usage
print(serialize({"name": "Alice"}))  # JSON
print(serialize([1, 2, 3]))          # JSON array
print(serialize(datetime.now()))     # ISO format
print(serialize(42))                 # Falls back to str()
To add serialization for new types, simply register them with the decorator. The original function remains unmodified, demonstrating the open/closed principle in action.

partial: Partial Function Application

The partial function creates partial applications by fixing some arguments of a function. This is useful when repeatedly calling functions with common parameters, such as database connections with consistent host and port values.
Partial application allows pre-setting arguments, creating specialized versions of more general functions.
python
from functools import partial
import logging

# Create specialized logging functions
log_debug = partial(logging.log, logging.DEBUG)
log_error = partial(logging.log, logging.ERROR)

# Partial application for configuration
def connect_to_db(host, port, username, password, database):
    return f"Connected to {username}@{host}:{port}/{database}"

# Create specialized connectors
connect_to_prod = partial(connect_to_db,
                          host="prod.db.com",
                          port=5432)
connect_to_dev = partial(connect_to_db,
                         host="localhost",
                         port=5432,
                         username="dev")

# Use them
prod_conn = connect_to_prod(username="admin",
                            password="secret",
                            database="myapp")
dev_conn = connect_to_dev(password="devpass",
                          database="myapp_dev")
This transforms a 5-parameter function into specialized 2-3 parameter versions, reducing configuration errors and eliminating credential mix-ups between environments.

Context Managers: Beyond 'with open()'

While the with open() pattern is widely known for file handling, Python allows creation of custom context managers for any resource that requires setup and cleanup operations.
The contextlib module provides decorators and utilities to create context managers without implementing the full protocol.

The contextmanager Decorator

The @contextmanager decorator converts a generator function into a context manager. Code before the yield statement executes on entry, and code after executes on exit, even if exceptions occur.
python
from contextlib import contextmanager
import time
import os

@contextmanager
def timed_operation(name):
    """Measure and report operation time"""
    print(f"Starting {name}...")
    start = time.time()
    try:
        yield start  # This is where the 'with' block executes
    finally:
        elapsed = time.time() - start
        print(f"{name} took {elapsed:.2f} seconds")

# Usage
with timed_operation("data processing") as start_time:
    process_large_dataset()
    # Any exception here will still trigger the finally block

@contextmanager
def temporary_env_var(key, value):
    """Temporarily set an environment variable"""
    old_value = os.environ.get(key)
    os.environ[key] = value
    try:
        yield
    finally:
        if old_value is None:
            os.environ.pop(key, None)
        else:
            os.environ[key] = old_value

# Usage
with temporary_env_var("DEBUG", "true"):
    run_tests()  # Tests run with DEBUG=true
# DEBUG is restored to its original value

ExitStack: Dynamic Context Management

python
from contextlib import ExitStack

def process_multiple_files(filenames):
    with ExitStack() as stack:
        # Open all files
        files = [
            stack.enter_context(open(fname))
            for fname in filenames
        ]

        # Process all files
        results = []
        for f in files:
            results.append(process_file(f))

        return results
    # All files are automatically closed

# Conditional context managers
def maybe_profile(should_profile):
    with ExitStack() as stack:
        if should_profile:
            stack.enter_context(profiler())

        # Always use timer
        stack.enter_context(timed_operation("processing"))

        return expensive_operation()
(Sequence: entering the with block calls __enter__() to acquire the resource and bind it to the as target; leaving the block—normally or via an exception—calls __exit__() to release it and complete cleanup.)
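The @contextmanager decorator is sugar over that two-method protocol. For comparison, here is a minimal class-based sketch of the same timer idea, implementing __enter__ and __exit__ directly:

```python
import time

class TimedOperation:
    """Class-based equivalent of a @contextmanager timer."""
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start = time.time()
        return self.start  # becomes the 'as' target

    def __exit__(self, exc_type, exc_value, traceback):
        elapsed = time.time() - self.start
        print(f"{self.name} took {elapsed:.2f} seconds")
        return False  # returning False lets any exception propagate

with TimedOperation("brief nap") as started:
    time.sleep(0.01)
```

The class form is handier when the manager carries state you want to inspect after the block; the generator form stays shorter for simple setup/teardown pairs.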

Type Hints: Beyond Basic Annotations

Python's type system has evolved far beyond simple int and str annotations.

Protocol: Structural Subtyping (Duck Typing with Types)

python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Drawable(Protocol):
    """Anything that can be drawn"""
    def draw(self) -> None: ...

class Circle:
    def draw(self) -> None:
        print("Drawing circle")

class Square:
    def draw(self) -> None:
        print("Drawing square")

def render(shape: Drawable) -> None:
    shape.draw()

# Both work without inheritance!
render(Circle())  # OK
render(Square())  # OK

# Runtime checking
assert isinstance(Circle(), Drawable)  # True

TypedDict: Typed Dictionaries

python
from typing import TypedDict, NotRequired  # NotRequired: Python 3.11+ (or typing_extensions)

class UserDict(TypedDict):
    id: int
    username: str
    email: str
    is_active: bool
    metadata: NotRequired[dict]  # Optional key

class APIResponse(TypedDict, total=False):  # All keys optional
    data: list[UserDict]
    error: str
    timestamp: float

# Type checker knows the structure
def process_user(user: UserDict) -> str:
    return f"User {user['username']} ({user['email']})"

# Type checker catches missing or misspelled keys
user: UserDict = {
    'id': 1,
    'username': 'alice',
    'email': 'alice@example.com',
    'is_active': True
}

Literal and Union Types

python
from typing import Literal, Union, overload

LogLevel = Literal["DEBUG", "INFO", "WARNING", "ERROR"]

def log(message: str, level: LogLevel = "INFO") -> None:
    print(f"[{level}] {message}")

# Type checker ensures valid values
log("Starting", "DEBUG")  # OK
log("Problem", "CRITICAL")  # Type error!

# Overloaded function signatures
@overload
def process(data: str) -> str: ...

@overload
def process(data: int) -> int: ...

@overload
def process(data: list) -> list: ...

def process(data: Union[str, int, list]):
    if isinstance(data, str):
        return data.upper()
    elif isinstance(data, int):
        return data * 2
    else:
        return data[::-1]

Descriptors: The Magic Behind Properties

Descriptors are what make @property, @staticmethod, and @classmethod work. You can create your own!
python
class ValidatedAttribute:
    """A descriptor that validates values"""
    def __init__(self, validator):
        self.validator = validator
        self.name = None

    def __set_name__(self, owner, name):
        self.name = f"_{name}"

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.name, None)

    def __set__(self, obj, value):
        if not self.validator(value):
            raise ValueError(f"Invalid value: {value}")
        setattr(obj, self.name, value)

class User:
    # Descriptors for validation
    age = ValidatedAttribute(lambda x: 0 <= x <= 150)
    email = ValidatedAttribute(lambda x: "@" in x)

    def __init__(self, age, email):
        self.age = age      # Calls descriptor's __set__
        self.email = email  # Calls descriptor's __set__

# Usage
user = User(25, "alice@example.com")  # OK
user.age = -5  # Raises ValueError!

Lazy Properties with Descriptors

python
class LazyProperty:
    """Compute property value only once"""
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self

        # Compute and cache the value
        value = self.func(obj)
        # Replace descriptor with computed value
        setattr(obj, self.name, value)
        return value

class DataAnalysis:
    def __init__(self, data):
        self.data = data

    @LazyProperty
    def expensive_computation(self):
        print("Computing... (this only happens once)")
        return sum(x ** 2 for x in self.data)

# Usage
analysis = DataAnalysis(range(1000000))
print(analysis.expensive_computation)  # Computes
print(analysis.expensive_computation)  # Returns cached value

Descriptor Protocol

(Attribute access on the owning object routes through the descriptor: obj.attr calls __get__(), obj.attr = value calls __set__(), and del obj.attr calls __delete__().)

Data Classes: More Than Just __init__ Generators

Data classes (Python 3.7+) are often underutilized beyond basic usage.

Advanced Data Class Features

python
from dataclasses import dataclass, field, InitVar
from typing import ClassVar
import uuid

@dataclass
class Product:
    name: str
    price: float

    # Class variable (not in __init__)
    tax_rate: ClassVar[float] = 0.08

    # Field with factory function
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

    # Init-only: accepted by __init__ and passed to __post_init__, not stored as a field
    validate: InitVar[bool] = True

    # Mutable default via factory
    tags: list[str] = field(default_factory=list)

    def __post_init__(self, validate):
        if validate and self.price < 0:
            raise ValueError("Price cannot be negative")

        # Computed attribute
        self.price_with_tax = self.price * (1 + self.tax_rate)

    # Display property
    @property
    def display_price(self):
        return f"${self.price:.2f}"

# Frozen (immutable) data classes
@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def distance_from_origin(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

# Usage
p1 = Point(3, 4)
# p1.x = 5  # Error! Frozen dataclass

Data Classes with Inheritance

python
@dataclass
class Vehicle:
    make: str
    model: str
    year: int

@dataclass
class Car(Vehicle):
    doors: int = 4

@dataclass
class Truck(Vehicle):
    payload_capacity: float
    doors: int = 2

# Comparison and sorting
@dataclass(order=True)
class Person:
    # sort_index is used for comparisons
    sort_index: tuple = field(init=False, repr=False)
    name: str
    age: int

    def __post_init__(self):
        # Sort by age, then name
        self.sort_index = (self.age, self.name)

people = [
    Person("Alice", 30),
    Person("Bob", 25),
    Person("Charlie", 30)
]
people.sort()  # Automatically uses sort_index

pathlib: File Paths as Objects

Stop concatenating strings with os.path.join()! pathlib (Python 3.4+) provides an object-oriented approach to file paths.
python
from pathlib import Path
import json

# Creating paths
project_dir = Path.home() / "projects" / "myapp"
config_file = project_dir / "config.json"

# Path operations
if config_file.exists():
    data = json.loads(config_file.read_text())

# Writing files
output_file = project_dir / "output" / "results.csv"
output_file.parent.mkdir(parents=True, exist_ok=True)
output_file.write_text("col1,col2\n1,2\n")

# Iterating over files
for python_file in project_dir.rglob("*.py"):
    print(f"Found: {python_file.relative_to(project_dir)}")

# Path properties
print(config_file.suffix)     # .json
print(config_file.stem)       # config
print(config_file.parent)     # /home/user/projects/myapp
print(config_file.is_file())  # True

pathlib vs os.path

python
import json

# Old way with os.path
import os
base_dir = os.path.dirname(os.path.abspath(__file__))
config_path = os.path.join(base_dir, "config", "settings.json")
if os.path.exists(config_path):
    with open(config_path) as f:
        config = json.load(f)

# New way with pathlib
from pathlib import Path
base_dir = Path(__file__).parent.absolute()
config_path = base_dir / "config" / "settings.json"
if config_path.exists():
    config = json.loads(config_path.read_text())
💡
pathlib automatically handles platform differences (Windows vs Unix paths), making your code more portable.
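The pure-path classes make that portability concrete: they model each platform's path syntax without touching the filesystem, so you can reason about foreign paths from any machine. A quick sketch:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Pure paths manipulate path syntax only -- no filesystem access required
posix = PurePosixPath("home") / "user" / "data.csv"
windows = PureWindowsPath("C:/") / "Users" / "data.csv"

print(posix)    # home/user/data.csv
print(windows)  # C:\Users\data.csv
print(posix.suffix, windows.suffix)  # .csv .csv -- same API either way
```

Path itself resolves to PosixPath or WindowsPath for the platform the code is running on.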

collections: Beyond dict and list

The collections module has several underused gems.

ChainMap: Layered Dictionaries

python
from collections import ChainMap

# Configuration with defaults and overrides
defaults = {'debug': False, 'port': 8080, 'host': 'localhost'}
environment = {'port': 9000, 'debug': True}
command_line = {'host': '0.0.0.0'}

# ChainMap searches in order
config = ChainMap(command_line, environment, defaults)

print(config['host'])   # '0.0.0.0' (from command_line)
print(config['port'])   # 9000 (from environment)
print(config['debug'])  # True (from environment)

# Updates go to the first map
config['new_key'] = 'value'
print(command_line)  # {'host': '0.0.0.0', 'new_key': 'value'}

defaultdict with Factories

python
from collections import defaultdict
from datetime import datetime

# Nested defaultdict for 2D structures
matrix = defaultdict(lambda: defaultdict(int))
matrix[0][0] = 1
matrix[5][5] = 2
print(matrix[3][3])  # 0 (automatically created)

# Tracking with timestamps
access_log = defaultdict(lambda: {'count': 0, 'first': datetime.now()})

def log_access(user_id):
    access_log[user_id]['count'] += 1
    if access_log[user_id]['count'] == 1:
        access_log[user_id]['last'] = access_log[user_id]['first']
    else:
        access_log[user_id]['last'] = datetime.now()

itertools: Combinatorial Power

itertools provides memory-efficient tools for creating iterators.
python
from itertools import (
    chain, cycle, repeat, count,
    accumulate, groupby, permutations,
    combinations, product, islice
)

# Chain multiple iterables
all_users = chain(
    get_active_users(),
    get_inactive_users(),
    get_pending_users()
)

# Sliding window
def sliding_window(iterable, n):
    """Generate sliding windows of size n"""
    it = iter(iterable)
    window = list(islice(it, n))
    if len(window) == n:  # guard against inputs shorter than n
        yield tuple(window)
    for item in it:
        window.append(item)
        window.pop(0)
        yield tuple(window)

# Usage
data = [1, 2, 3, 4, 5]
for window in sliding_window(data, 3):
    print(window)  # (1,2,3), (2,3,4), (3,4,5)

# Grouping data
data = [
    {'type': 'A', 'value': 1},
    {'type': 'A', 'value': 2},
    {'type': 'B', 'value': 3},
    {'type': 'A', 'value': 4},
]

# Must sort first for groupby!
data.sort(key=lambda x: x['type'])
for key, group in groupby(data, key=lambda x: x['type']):
    items = list(group)
    print(f"{key}: {items}")

itertools Combinations

Starting from [A, B, C]:
permutations(2) → AB, AC, BA, BC, CA, CB
combinations(2) → AB, AC, BC
product(repeat=2) → AA, AB, AC, BA, BB, BC, CA, CB, CC
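Those outputs are straightforward to verify in code:

```python
from itertools import permutations, combinations, product

letters = "ABC"

# permutations: order matters, no element reused
perms = ["".join(p) for p in permutations(letters, 2)]
print(perms)  # ['AB', 'AC', 'BA', 'BC', 'CA', 'CB']

# combinations: order ignored, no element reused
combos = ["".join(c) for c in combinations(letters, 2)]
print(combos)  # ['AB', 'AC', 'BC']

# product: Cartesian product, elements may repeat
pairs = ["".join(p) for p in product(letters, repeat=2)]
print(len(pairs))  # 9, from 'AA' through 'CC'
```

All three return lazy iterators, so they stay memory-efficient even over large inputs.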

Named Tuples: Lightweight Classes

Named tuples provide a memory-efficient alternative to classes for simple data containers.
python
from typing import NamedTuple
from collections import namedtuple

# Modern approach with typing
class Point(NamedTuple):
    x: float
    y: float

    def distance_from_origin(self) -> float:
        return (self.x ** 2 + self.y ** 2) ** 0.5

# Classic approach
Color = namedtuple('Color', ['red', 'green', 'blue', 'alpha'],
                   defaults=[255])  # alpha defaults to 255

# Usage
p = Point(3, 4)
print(p.x, p.y)    # Access by name
print(p[0], p[1])  # Access by index
x, y = p           # Unpacking

color = Color(128, 0, 255)  # alpha=255 (default)

__slots__: Memory Optimization

For classes with many instances, __slots__ can significantly reduce memory usage.
python
class RegularPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class SlottedPoint:
    __slots__ = ['x', 'y']

    def __init__(self, x, y):
        self.x = x
        self.y = y

# Memory comparison
import sys
regular = RegularPoint(1, 2)
slotted = SlottedPoint(1, 2)

print(sys.getsizeof(regular.__dict__))  # e.g. ~296 bytes on some 64-bit builds (varies by version)
# slotted has no __dict__, saving that memory on every instance

# Slotted classes also get slightly faster attribute access
# thanks to their fixed memory layout
⚠️
Use __slots__ carefully. It prevents dynamic attribute addition and can complicate inheritance. Best for classes with many instances and fixed attributes.
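The first restriction is easy to demonstrate: a slotted instance simply rejects attributes outside __slots__ (minimal sketch):

```python
class SlottedPoint:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = SlottedPoint(1, 2)
assert not hasattr(p, "__dict__")  # no per-instance dict at all

try:
    p.z = 3  # not declared in __slots__
except AttributeError as err:
    print(f"Rejected: {err}")
```

This strictness is also why monkey-patching and some serialization libraries need extra care around slotted classes.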

Ellipsis (...): Not Just for Type Hints

The ... (Ellipsis) literal has several uses beyond type hints.
python
from typing import Callable, Protocol

# In numpy-style slicing
class Matrix:
    def __getitem__(self, key):
        if key is ...:
            return "All elements"
        return f"Specific: {key}"

m = Matrix()
print(m[...])  # "All elements"

# As a placeholder
def not_implemented_yet():
    ...  # More explicit than 'pass'

# In type hints
Handler = Callable[..., None]  # Any args, returns None

# As a sentinel value
MISSING = ...  # Distinguishes "not passed" from an explicit None

def get_value(key, default=MISSING):
    if default is not MISSING:
        return cache.get(key, default)
    return cache[key]  # Raises KeyError if missing

# In stub files and protocols
class MyProtocol(Protocol):
    def method(self) -> int: ...  # Just the signature

Conclusion

These 13 Python features represent powerful tools for writing more efficient, readable, and maintainable code. From the walrus operator's concise assignments to pattern matching's elegant data destructuring, each feature addresses specific programming challenges.
The standard library contains many more underutilized features. Understanding and applying these tools appropriately can significantly improve code quality and developer productivity.
Consider gradually incorporating these features into your codebase where they provide clear benefits. Start with simple applications like @lru_cache for expensive computations or pathlib for file operations, then explore more advanced features as needed.