type() and Class Creation: How Python Builds a Class at Runtime
type() tells you what an object is. It can also build classes from scratch at runtime. One function, two jobs — and the second one changes how you read Python.
type(order) returns <class '__main__.Order'>. I've been writing that in debug print statements since day one. You're telling me that's not what type() is actually for?
I'm telling you that's half of what type() is for. You've been using the one-argument form: pass it one object, it tells you what type that object is. That's real and useful. But type() also has a three-argument form that does something completely different. type(name, bases, namespace) — pass it a string, a tuple of parent classes, and a dictionary of attributes, and it builds you a brand new class. Not an instance. A class.
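Here are the two forms side by side; the names Point and dims are made up for illustration:

```python
# One-argument form: inspection.
n = 42
print(type(n))           # <class 'int'>

# Three-argument form: construction. 'Point' and 'dims' are illustrative names.
Point = type('Point', (object,), {'dims': 2})
p = Point()
print(type(p).__name__)  # Point
print(p.dims)            # 2
```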
That's the same function? The same type() I've been calling in print statements?
The same function. One argument: inspection. Three arguments: construction. Python uses this every time you write a class statement. When you write class Order:, Python parses the class body into a dictionary, collects the base classes into a tuple, and then calls type('Order', (object,), namespace_dict) to produce the class. The class statement is syntax. type() is the mechanism.
So every class I've ever written — class Order, class Product, class PriorityOrder(Order) — Python was calling type() under the hood every single time?
Every time. Let me show you the transformation so you can see it directly.
# The class statement you write:
class Order:
    def __init__(self, id, customer):
        self.id = id
        self.customer = customer

    def __repr__(self):
        return f"Order({self.id}, {self.customer})"

# What Python does with it:
namespace = {
    '__init__': lambda self, id, customer: (
        setattr(self, 'id', id) or setattr(self, 'customer', customer)
    ),
    '__repr__': lambda self: f"Order({self.id}, {self.customer})",
}
Order = type('Order', (object,), namespace)
The class statement on top. The type() call below it. Same thing.
That lambda trick looks awkward. You wouldn't actually write __init__ that way.
Nobody would. That's not how you write it in practice. I'm showing you the mechanical equivalence, not the style guide. In real usage, you define the functions first, then pass them in the dict:
def _init(self, id, customer):
    self.id = id
    self.customer = customer

def _repr(self):
    return f"Order({self.id}, {self.customer})"

Order = type('Order', (object,), {
    '__init__': _init,
    '__repr__': _repr,
})

o = Order('ORD-001', 'Alice')
print(o)  # Order(ORD-001, Alice)
Full class. Created without a class statement. type() is the casting director: hand it a name (the script), the base classes (the understudies), and the namespace dict (the props list), and it produces a working class. The class statement is just the actor taking the stage from the director's hands.
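And nothing second-class comes out of type(): a class built this way can be subclassed, passes isinstance checks, and hands attributes down like any other. A quick check, with throwaway names:

```python
# Base built with type(); subclass built with a class statement.
Base = type('Base', (object,), {'kind': 'base'})

class Child(Base):
    pass

print(issubclass(Child, Base))    # True
print(isinstance(Child(), Base))  # True
print(Child().kind)               # base  (inherited class attribute)
```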
When would you actually do this? In the codebase I work on, I've never seen someone call type() with three arguments.
Rarely in application code. But frameworks do it constantly. Django generates model classes from database schema. Pydantic builds validator classes from type annotations. SQLAlchemy creates table classes at import time. When you see type() called with three arguments in someone's library code, this is what's happening. The other reason to understand it: tomorrow's lesson is metaclasses. A metaclass is a class that Python calls in place of type() when it creates a new class. You cannot understand metaclasses without understanding that type() is what they replace.
Okay. So the real use case is when you want to create classes programmatically — based on data, not source code.
That's it exactly. Imagine you're processing an API response that describes order types. You don't know at write time how many order types there will be or what fields they have. But at runtime, you can build them:
base_order_fields = ['id', 'customer']

def make_order_class(name, extra_fields):
    all_fields = base_order_fields + list(extra_fields)

    def __init__(self, **kwargs):
        for field in all_fields:
            if field not in kwargs:
                raise ValueError(f"Missing required field: {field}")
            setattr(self, field, kwargs[field])

    def __repr__(self):
        vals = ', '.join(f'{f}={getattr(self, f)!r}' for f in all_fields)
        return f"{name}({vals})"

    namespace = {
        '__init__': __init__,
        '__repr__': __repr__,
        '_fields': all_fields,
    }
    return type(name, (object,), namespace)

SubscriptionOrder = make_order_class('SubscriptionOrder', ['renewal_date'])
PriorityOrder = make_order_class('PriorityOrder', ['sla_hours'])

o1 = SubscriptionOrder(id='ORD-001', customer='Alice', renewal_date='2027-01-01')
o2 = PriorityOrder(id='ORD-002', customer='Bob', sla_hours=4)
print(o1)  # SubscriptionOrder(id='ORD-001', customer='Alice', renewal_date='2027-01-01')
print(o2)  # PriorityOrder(id='ORD-002', customer='Bob', sla_hours=4)
print(type(o1).__name__)  # SubscriptionOrder
Two completely different class shapes — both built from one function, at runtime, from a list of field names.
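Notice what the generated __init__ also buys you: validation. Here is a trimmed-down, self-contained version of the same factory idea (make_cls and the field names are hypothetical) showing what happens when a required field is missing:

```python
def make_cls(name, fields):
    # Same pattern as the factory above, reduced to the validation path.
    def __init__(self, **kwargs):
        for f in fields:
            if f not in kwargs:
                raise ValueError(f"Missing required field: {f}")
            setattr(self, f, kwargs[f])
    return type(name, (object,), {'__init__': __init__, '_fields': fields})

Order = make_cls('Order', ['id', 'customer'])
try:
    Order(id='ORD-003')  # 'customer' is missing
except ValueError as e:
    print(e)  # Missing required field: customer
print(Order._fields)  # ['id', 'customer']
```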
That's what I need for the order pipeline. We have five different order types and they're all defined with hand-written boilerplate that's almost identical. This is the factory pattern I've been writing manually.
That is the factory pattern, with the factory being the Python runtime itself. Now: there's one more piece. __init_subclass__.
That's a dunder I haven't seen before.
It's a hook. When you define a class that inherits from another class, Python calls __init_subclass__ on the parent. That parent can use it to register the subclass, validate it, or modify it — without the subclass knowing.
class OrderRegistry:
    _registry = {}

    def __init_subclass__(cls, order_type=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if order_type is not None:
            OrderRegistry._registry[order_type] = cls
            print(f"Registered: {order_type} → {cls.__name__}")

class SubscriptionOrder(OrderRegistry, order_type='subscription'):
    pass

class PriorityOrder(OrderRegistry, order_type='priority'):
    pass

# Registered: subscription → SubscriptionOrder
# Registered: priority → PriorityOrder

print(OrderRegistry._registry)
# {'subscription': <class 'SubscriptionOrder'>, 'priority': <class 'PriorityOrder'>}
The subclasses do not explicitly register themselves. The parent's __init_subclass__ fires the moment the subclass is defined — at import time. This is how plugin systems work.
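Once the registry is populated, dispatch is just a dict lookup. A self-contained sketch, with made-up handler names, of how a plugin system uses this:

```python
class HandlerBase:
    _registry = {}

    def __init_subclass__(cls, key=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if key is not None:
            HandlerBase._registry[key] = cls

class CsvHandler(HandlerBase, key='csv'):
    def handle(self):
        return 'parsed csv'

class JsonHandler(HandlerBase, key='json'):
    def handle(self):
        return 'parsed json'

def dispatch(key):
    # Look up the class registered for this key, instantiate it, run it.
    return HandlerBase._registry[key]().handle()

print(dispatch('json'))  # parsed json
```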
Amir's shared library has something exactly like this. There's a base class and all the order handlers register themselves. I never understood how they got into the registry without explicitly calling a register function anywhere. That's __init_subclass__.
That's __init_subclass__. You just reverse-engineered Amir's library design from first principles. He did not write magic. He used the hook.
So __init_subclass__ fires automatically on class definition. Is that because Python uses type() under the hood, and type() knows to call the parent's __init_subclass__?
Exactly right. The class creation machinery, type() itself, is responsible for calling __init_subclass__ once the new class is built. To be precise, the call happens in type.__new__, right after the class object is constructed. Which means understanding type() is understanding where all these hooks live. Tomorrow we go one level deeper: if type() is what creates classes, what creates type()? And what happens when you replace type() with something else?
type() is an object too. Which means it has a type. Which means... something created type()?
Check it yourself. type(type) returns <class 'type'>. type() is its own metaclass. It creates itself. That's the recursion we're walking into tomorrow.
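You can verify the whole chain with a few assertions:

```python
class Order:
    pass

o = Order()
assert type(o) is Order        # instances are made by their class
assert type(Order) is type     # classes are made by type
assert type(type) is type      # type is made by... type
assert isinstance(type, type)  # the recursion bottoms out here
print("all checks pass")
```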
Okay. I need to sit with that for a minute. But I understand what we did today: the class statement is syntactic sugar for type(name, bases, namespace). I can call type() directly to build classes at runtime. __init_subclass__ is a hook that fires automatically when a subclass is defined. And tomorrow this connects to how Amir's metaclasses work.
That summary is exactly right. For the exercise: you're going to write make_order_class — a factory that takes a class name and a list of extra field names, and returns a fully working Order subclass built with type(). The subclass should accept all base fields plus the extras, store them as attributes, and be a real Python class you can instantiate and inspect.