Algebra 2B


Theorem 2.14 (The first isomorphism theorem).

Let φ: R → S be a ring homomorphism. Then there is a ring isomorphism φ̄: R/Ker(φ) → Im(φ). Proof: page 17 (sketch)

go over question 6 on assignment 5

was distracted

R/∼

{[a] | a ∈ R}

The Gaussian integers Z[i]

{a + bi ∈ C | a,b ∈ Z} form a subring of the field C, so Z[i] is a ring. (Also an integral domain.)
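A quick sanity check of the subring test, as a sketch in Python (the helper names g_sub and g_mul are ad hoc, not from the notes): Z[i] is closed under subtraction and multiplication.

```python
# Sketch: Gaussian integers as tuples (a, b) representing a + bi.
# Closure under subtraction and multiplication is the subring test.

def g_sub(x, y):
    """(a+bi) - (c+di) = (a-c) + (b-d)i"""
    return (x[0] - y[0], x[1] - y[1])

def g_mul(x, y):
    """(a+bi)(c+di) = (ac - bd) + (ad + bc)i"""
    return (x[0] * y[0] - x[1] * y[1], x[0] * y[1] + x[1] * y[0])

z, w = (2, 3), (1, -4)        # 2+3i and 1-4i
print(g_sub(z, w))            # (1, 7): the difference stays in Z[i]
print(g_mul(z, w))            # (14, -5): (2+3i)(1-4i) = 14 - 5i
```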

R[[x]] (Along with definition of addition and multiplication)

{∑(k=0,∞) akx^k | ak ∈ R ∀ k ≥ 0}, the set of all formal power series over the ring R, where addition and multiplication on R[[x]] are defined as follows: ∑(k=0,∞) akx^k + ∑(k=0,∞) bkx^k = ∑(k=0,∞) (ak + bk)x^k and (∑(k=0,∞) akx^k)·(∑(k=0,∞) bkx^k) = a0b0 + (a1b0 + a0b1)x + (a2b0 + a1b1 + a0b2)x² + ··· = ∑(k=0,∞) (∑(i+j=k) aibj) x^k
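The addition and Cauchy-product formulas above can be sketched on truncated coefficient lists, where a[k] holds the coefficient of x^k (the helper names ps_add and ps_mul are made up for illustration):

```python
# Sketch: arithmetic on formal power series truncated to finitely many
# coefficients; the product coefficient is c_k = sum over i+j=k of a_i*b_j.

def ps_add(a, b):
    return [x + y for x, y in zip(a, b)]

def ps_mul(a, b):
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

# (1 + x + x^2 + x^3 + x^4) * (1 - x) telescopes to 1 up to degree 4
geom = [1, 1, 1, 1, 1]
one_minus_x = [1, -1, 0, 0, 0]
print(ps_mul(geom, one_minus_x))  # [1, 0, 0, 0, 0]
```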

Show that the (polynomial) evaluation map is a ring homomorphism

φ: R[x] → R, f |→ f(r), for a fixed r ∈ R. Proof in notes (Example 2.6)

Why is the following not a ring homomorphism?

φ: Z → 2Z defined by φ(n) = 2n φ(nm) = 2nm (typically does not equal) 4nm = (2n)(2m) = φ(n)φ(m).
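A tiny check of the failure (phi here is just the map above): addition is preserved but multiplication is not.

```python
# Sketch: phi(n) = 2n preserves addition but not multiplication,
# so it is not a ring homomorphism from Z to 2Z.
phi = lambda n: 2 * n
n, m = 3, 5
print(phi(n + m) == phi(n) + phi(m))   # True: 2(n+m) = 2n + 2m
print(phi(n * m) == phi(n) * phi(m))   # False: 2nm != 4nm when nm != 0
```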

Proposition 1.31. Let I be an ideal in R, and define ∼ on R by setting a ∼ b if and only if a−b ∈ I. Then...

∼ is a congruence relation in which the equivalence classes are the cosets of I in R, i.e., we have [a] = a + I for all a ∈ R. In particular, [0] = I. Proof: page 12 of the notes

Proposition 5.19. Let α ∈ End(V) be such that ∆α(t) = (λ−t)^r and mα(t) = (t−λ)^s. For any nonzero vector v ∈ V, define e := e(v) ∈ Z>0 to be the smallest positive integer such that (α−λid)^e v = 0, and write v1 = (α−λid)^(e−1)v, v2 = (α−λid)^(e−2)v, ..., v(e−1) = (α−λid)v, ve = v. Then

(1) (v1,v2,...,ve) is a basis for the C-vector space W := C[α]v; (2) in this basis, the matrix for the linear map β := α|W ∈ End(W) is the e×e matrix J(λ,e) (λ on the diagonal and 1's on the superdiagonal); (3) we have that Eβ(λ) = Cv1, that mβ(t) = (t−λ)^e and that ∆β(t) = (λ−t)^e. Proof: page 44 (sketch)

Lemma 3.28. Let R be a UFD with field of fractions F, and suppose h ∈ R[x]. Then (4 statements):

(1) If h = fg with f, g ∈ R[x] primitive, then h is primitive. (2) If h = f1f2 · · · fk where fj ∈ R[x] has content cj , then h has content c1c2 · · · ck. (3) If h is irreducible in F[x] and primitive in R[x], then h is irreducible in R[x]. (4) If h = g1g2 · · · gk where gj ∈ F[x], then h = c · f1f2 · · · fk where c ∈ R, each fj ∈ R[x] is primitive and gj = uj · fj for some unit uj ∈ F. Proof: not examinable

Examples 4.3 (k-algebras).

(1) Let k be a field. Then k = k · 1 is a k-algebra of dimension 1. (2) For n ≥ 1, the set Mn(k) of n × n matrices with coefficients in k is a k-algebra of dimension n². (3) The field C = R + Ri is an R-algebra that is a 2-dimensional vector space over R

What's the norm in R

(1) The norm of a ∈ R is absolute value |a| = √a², and since |a · b| = |a| · |b| for all a, b ∈ R we have that R is a normed R-algebra.

Example of constructing intermediate fields between R and C

(1) We have that R ⊆ C and that i ∈ C is a root of the irreducible polynomial x² + 1 ∈ R[x]. Here R[i] = R + Ri = C has basis (1, i).

Lemma 1.8. In any ring (R,+,·), we have...

(1) a·0 = 0 and 0 = 0·a for all a ∈ R; and (2) a·(−b) = −(a·b) and −(a·b) = (−a)·b for all a,b ∈ R. let a ∈ R. Since 0 is an additive identity, one of the distributive laws gives a·0 = a·(0 + 0) = a·0 + a·0. Adding −(a·0) on the left on both sides gives −(a·0) + a·0 = −(a·0) + a·0 + a·0. The left hand side is zero, and the associativity law gives that the right hand side is (−(a·0) + a·0) + a·0 = 0 + a·0 = a·0 as required. The second identity is similar. To prove (2), note that a·b + a·(−b) = a·(b + (−b)) = a·0 = 0. This means that a·(−b) is the additive inverse of ab, that is, a·(−b) = −(a·b). The second identity is similar.

Lemma 2.19. Let R be a ring with 1. Then either: ...

(1) char(R) = 0, in which case Z1R is isomorphic to Z; or (2) char(R) = n > 0, in which case Z1R is isomorphic to Zn. Proof: page 18

Definition 3.19 (UFD). An integral domain R is called a Unique Factorisation Domain (UFD) if

(1) every nonzero nonunit element in R can be written as the product of finitely many irreducibles in R; and (2) given two such decompositions, say r1 · · · rs = r'1· · · r't we have that s = t and, after renumbering if necessary, we have Rri = Rr'i for 1 ≤ i ≤ s

Lemma 2.4. If φ : R → S is a ring homomorphism then

(1) for a,b ∈ R, we have φ(b−a) = φ(b)−φ(a); (2) φ(0R) = 0S; (3) for a ∈ R, we have φ(−a) = −φ(a). Proof: (1), we have φ(b−a) + φ(a) = φ((b−a) + a)= φ(b + (a−a))= φ(b + 0) = φ(b),and add −φ(a) to both sides. For (2), substitute b = a in (1) to obtain φ(0R) = φ(a−a) = φ(a)−φ(a) = 0S. For part (3), substitute b = 0 into part (1) and use part (2) to obtain φ(−a) = φ(0R −a) = φ(0R)−φ(a) = 0S −φ(a) = −φ(a)

Definition 3.5 (Primes and irreducibles). Let R be an integral domain. Let p ∈ R be nonzero and not a unit. Then we say: (1) p is prime iff... (2) p is irreducible iff...

(1) p is prime iff for all a, b ∈ R, we have p|ab =⇒ p|a or p|b. (2) p is irreducible iff for all a, b ∈ R, we have p = ab =⇒ a or b is a unit. We say that p is reducible if it's not irreducible, i.e., if there exists a, b ∈ R such that p = ab where neither a nor b is a unit.

Lemma 5.15. For α ∈ End(V ), suppose V = V1 ⊕ V2 ⊕ · · · ⊕ Vk where V1, . . . , Vk are α-invariant subspaces. For 1 ≤ i ≤ k, write αi := α|Vi ∈ End(Vi). Then

(1) α = α1 ⊕ · · · ⊕ αk ∈ ⊕(i=1,k) End(Vi); and (2) the minimal polynomial mα is the least common multiple of mα1, . . . , mαk. Proof not examinable

Example of constructing intermediate fields between Q and R

We have that Q ⊆ R and that ∛2 is a root of the irreducible polynomial x³ − 2 ∈ Q[x]. Here Q[∛2] = Q + Q·∛2 + Q·(∛2)² has basis (1, ∛2, (∛2)²).

Definition 5.16 (Cyclic subspace generated by v).

(Cyclic subspace generated by v). For any vector space V and any v ∈ V , the cyclic subspace generated by v is the subspace C[α]v ={p(α)v ∈ V | p ∈C[t]}

Lemma 1.5. A nonempty subset H of a group (G,∗) is a subgroup if and only if

(H,∗) is a group. Proof on page 4 of notes. (⇒) Let H be a subgroup of (G,∗). Since H is nonempty, there exists a ∈ H, and hence e = a∗a⁻¹ ∈ H by condition (1.1) (existence of the identity). Also a⁻¹ = e∗a⁻¹ ∈ H by (1.1) (existence of inverses). For closure, let a,b ∈ H; we've just shown that b⁻¹ ∈ H, so applying condition (1.1) to the elements a,b⁻¹ ∈ H gives a∗b = a∗(b⁻¹)⁻¹ ∈ H. In particular, ∗ is a binary operation on H, and since (G,∗) is a group, the operation ∗ on H is associative (because it's associative on the whole of G, which contains H). (⇐) Let H be a subset of G such that (H,∗) is a group. The identity element e ∈ H, so H is nonempty. Let a,b ∈ H. Then b⁻¹ lies in H since H is a group, and since ∗ is a binary operation on H we have a∗b⁻¹ ∈ H as required.

Theorem 1.26 (Quotient rings). Let ∼ be a congruence on a ring R. Define addition and multiplication on the set R/∼ of equivalence classes as follows: for a,b ∈ R, define [a] + [b] := [a + b] and [a]·[b] := [a·b]. Then

(R/∼,+,·) is a ring with zero element [0]. Moreover: (1) if R is a ring with 1, then the element [1] makes R/∼ into a ring with 1; and (2) if R is commutative then so is R/∼. Proof on page 10 of notes

Lemma 1.18. Let S be a subset of a ring (R,+,·). Then S is a subring of R if and only if ...

(S,+,·) is a ring. Proof: exercise

Two formal power series ∑(k=0,∞) akx^k and ∑(k=0,∞) bkx^k coincide if and only if

ak = bk for all k ≥ 0, i.e., the coefficient sequences (ak) and (bk) coincide

What's the norm in C

(standard norm on R²) Examples 4.20 (page 37).

What's the norm in H

(standard norm on R⁴) Examples 4.20 (page 37).

Theorem 5.25 (Primary Decomposition)

Let α: V → V be a linear operator and write mα = p1^n1 ··· pk^nk, where p1, . . . , pk are the distinct monic irreducible factors of mα in k[t]. Let qi = pi^ni and let Vi = Ker(qi(α)). Then: (1) the subspaces V1, . . . , Vk are α-invariant and V = V1 ⊕ · · · ⊕ Vk; and (2) the maps αi = α|Vi for 1 ≤ i ≤ k satisfy α = α1 ⊕ · · · ⊕ αk and mαi = qi. Proof: not examinable

The field C has characteristic ... ?

0 and hence so do Z, Q, R

Proposition 2.20. The characteristic of an integral domain is either

0 or a prime. Proof. Let R be an integral domain. Notice first that since R ≠ {0}, we have char(R) ≠ 1. Suppose that n := char(R) is neither 0 nor a prime, i.e., n = r · s for some 1 < r, s < n. Then 0 = n · 1R = rs · 1R = (r · 1R) · (s · 1R), but since R is an integral domain it follows that either r · 1R = 0 or s · 1R = 0. Either case is impossible in a ring of characteristic n because r, s < n. Thus, the characteristic must be zero or prime after all.
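A sketch in Python mirroring the argument (char_zn is a hypothetical helper name, not from the notes): in Z_n the characteristic is n, and a composite n = r·s produces zero divisors.

```python
# Sketch: the characteristic of Z_n is n; when n = r*s is composite,
# (r*1)*(s*1) = 0 in Z_n exhibits zero divisors, as in the proof above.

def char_zn(n):
    """Smallest m > 0 with m*1 = 0 in Z_n."""
    m, acc = 1, 1 % n
    while acc != 0:
        m += 1
        acc = (acc + 1) % n
    return m

print(char_zn(4))      # 4, composite
print((2 * 2) % 4)     # 0: [2]*[2] = [0], so Z_4 is not an integral domain
print(char_zn(5))      # 5, prime: Z_5 is a field, hence an integral domain
```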

What is the characteristic of the zero ring?

1 The zero ring R = {0} is actually a ring with 1 (!!), and it's the only ring for which char(R) = 1

Examples of constructing field extensions containing roots

4.17. page 36

Let V be an n-dimensional vector space over k. Let α: V → V be a linear operator and let A be the matrix representing α with respect to a given basis (v1, v2, . . . , vn) of V . Given a polynomial f = ∑(i=0,n) ait^i ∈ k[t], what is f(A)?

f(A) = a₀In + a₁A + a₂A² + · · · + anAⁿ, the n × n matrix obtained by substituting A for t (and formally replacing t⁰ = 1 by the n × n identity matrix In). It is not hard to show that the map k[t] → Mn(k) defined by sending f |→ f(A) is a ring homomorphism.
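A minimal sketch of evaluating f(A) for 2×2 matrices with plain nested lists (all helper names are ad hoc); the example checks that f(t) = t² − 1 kills the matrix A = [0,1;1,0] from the minimal-polynomial card.

```python
# Sketch: f(A) = a0*I + a1*A + ... + an*A^n for a 2x2 matrix.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def poly_at_matrix(coeffs, A):
    """coeffs = [a0, a1, ..., an]; returns f(A), with t^0 replaced by I."""
    result, power = [[0, 0], [0, 0]], [[1, 0], [0, 1]]
    for a in coeffs:
        result = mat_add(result, mat_scale(a, power))
        power = mat_mul(power, A)
    return result

A = [[0, 1], [1, 0]]
print(poly_at_matrix([-1, 0, 1], A))   # f(t) = t^2 - 1 gives the zero matrix
```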

Z1R

= {n · 1R | n ∈ Z} ={· · · ,(−2)1R, −1R, 0R, 1R,(2)1R, · · · }

Abelian group

A group (G,∗) is abelian if a∗b = b∗a for all a,b ∈ G.

Group

A group is a pair (G,∗), where G is a set, ∗ is a binary operation on G and the following axioms hold: • (The associative law) (a∗b)∗c = a∗(b∗c) for all a,b,c ∈ G. • (Existence of an identity) There exists an element e ∈ G with the property that e∗a = a and a∗e = a for all a ∈ G. • (The existence of an inverse) For each a ∈ G there exists b ∈ G such that a∗b = b∗a = e.

Definition 4.1 (k-algebra)

A k-vector space V is called a k-algebra if it's also a ring, where the scalar product and the ring multiplication are compatible in the following sense: (4.1) λ(u · v) = (λu) · v = u · (λv) for all u, v ∈ V, λ ∈ k.

Corollary 5.27 (Diagonalisability).

A linear map α: V → V is diagonalisable iff mα(t) = (t−λ1)(t−λ2)···(t−λk) for distinct λ1, . . . , λk ∈ k. Proof page 51 (sketch)

Definition 4.11 (Subfield and field extension).

A non-zero subring k ≠ {0} of a field K is a subfield if for each nonzero element a ∈ k, the multiplicative inverse of a in K lies in k. We also refer to k ⊆ K as a field extension. In this case, choose non-zero a ∈ k to write 1K = a · a⁻¹, whence 1K ∈ k, and then it is easy to see that k is a field in its own right with 1k = 1K. Conversely, if k is a non-zero subring of a field K that is itself a field (so there is a multiplicative identity in k and each non-zero a ∈ k has a multiplicative inverse in k), then k is a subfield: 1k = 1K and the inverses in k and K coincide.

Subgroup

A nonempty subset H of a group G is a subgroup of G iff (1.1) ∀ a,b ∈ H, we have a∗b⁻¹ ∈ H.

Definition 1.27 (Ideal).

A nonempty subset I of a ring R is an ideal in R if and only if (1) ∀ a,b ∈ I, we have a−b ∈ I; and (2) ∀ a ∈ I, r ∈ R, we have r·a, a·r ∈ I. Remark 1.28. This simply means that an ideal is an additive subgroup that is closed under multiplication by all elements of the ring. Notice that every ideal I in R is a subring of R. In particular, Lemma 1.17 implies that every ideal contains 0R.

Subring

A nonempty subset S of a ring R is a subring iff ∀ a,b ∈ S, we have a−b ∈ S. ∀ a,b ∈ S, we have a·b ∈ S. The sets of the form r + S = {r + s | s ∈ S} for r ∈ R are the cosets of S in R.

Definition 3.25 (Primitive polynomial).

A nonzero polynomial f ∈ R[x], with R a UFD, is primitive if its coefficients are coprime. More generally, we say f has content c ∈ R if c is an hcf of the coefficients of f (thus if c is a unit, f is primitive).

Definition 1.9 (Units in a ring with 1).

A ring (R,+,·) is called a ring with 1 (also called a unital ring) if there is a multiplicative identity, i.e., an element 1 ∈ R satisfying a·1 = 1·a = a for all a ∈ R. An element a ∈ R in a ring with 1 is a unit if it has a multiplicative inverse, i.e., if there exists b ∈ R such that a·b = b·a = 1.

The composition of two ring homomorphisms is?

A ring homomorphism Proof exercise sheet 3 q1

Definition 1.6 (Ring)

A ring is a triple (R,+,·), where R is a set with binary operations +: R×R → R (a,b) |→ a + b and ·: R×R → R (a,b) |→ a·b such that the following axioms hold. • (R,+) is an abelian group. Write 0 for the (unique) additive identity, and −a for the (unique) additive inverse of a ∈ R, so (a + b) + c = a + (b + c) for all a,b,c ∈ R; a + 0 = a for all a ∈ R; a + b = b + a for all a,b ∈ R; a + (−a) = 0 for all a ∈ R. • (The associative law under multiplication) (a·b)·c = a·(b·c) for all a,b,c ∈ R; • (The distributive laws hold) a·(b + c) = (a·b) + (a·c) for all a,b,c ∈ R; (b + c)·a = (b·a) + (c·a) for all a,b,c ∈ R.

An equivalence relation

An equivalence relation on R is a relation ∼ that is reflexive, symmetric and transitive

Definition 3.10 (PID).

An ideal I of R is a principal ideal if I = Ra for some a ∈ R. An integral domain R is a Principal Ideal Domain (PID) if every ideal in R is principal.

Lemma 1.20. If a subring S of an integral domain R contains the element 1, then S is...

An integral domain Proof. The only property of an integral domain R that is not necessarily inherited by every subring is the existence of 1, but this follows from the assumptions

Show that R[[x]] is a ring

As R is an abelian group with respect to the ring addition, it follows readily that (R[[x]],+) is an abelian group in which the power series 0 = 0 + 0x + 0x² + ··· is the zero element. Multiplication is associative: let f = ∑(k=0,∞) akx^k, g = ∑(k=0,∞) bkx^k, h = ∑(k=0,∞) ckx^k. The coefficient of xⁿ in the product (fg)h is ∑(i+j+k=n) (aibj)ck = ∑(i+j+k=n) ai(bjck), since multiplication in R is associative, and this is the coefficient of xⁿ in f(gh). It follows that (fg)h = f(gh), so multiplication in R[[x]] is associative. Distributive law: the coefficient of xⁿ in f(g + h) is ∑(i+j=n) ai(bj + cj) = ∑(i+j=n) aibj + ∑(i+j=n) aicj, which equals the coefficient of xⁿ in fg + fh, so f(g + h) = fg + fh. Similarly one proves that (g + h)f = gf + hf. This completes the proof that (R[[x]],+,·) is a ring.

What's the minimal polynomial of the matrix [0,1;1,0]

A² = I₂ and p(A) = 0 where p(t) = t² − 1. As A is not a diagonal matrix, we have that q(A) ≠ 0 for any q = t − λ. Hence mA(t) = t² − 1

Look at example 1.35

Caaan dooo!

Field of fractions of an integral domain

Consider the set T = {(a,b) ∈ R×R | b ≠ 0} together with two binary operations T×T → T given by (a,b) + (c,d) := (ad + bc, bd) and (a,b)·(c,d) := (ac, bd). These operations are well defined - that is, the formulas each define a map from T×T to T - precisely because R is an integral domain. Indeed, suppose otherwise, i.e., suppose that bd = 0. The fact that R is an integral domain forces either b = 0 or d = 0, but then either (a,b) ∉ T or (c,d) ∉ T, which is absurd.

The quaternions

Consider the vector space of dimension 4 over R with basis 1, i, j, k, that is H = R + Ri + Rj + Rk ={a + bi + cj + dk | a, b, c, d ∈ R}, where the R-bilinear product is determined from i² = j² = k² = −1, ij = k, jk = i, ki = j, ji = −k, kj = −i, ik = −j.
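The bilinear product can be sketched as the Hamilton product on 4-tuples (a, b, c, d) standing for a + bi + cj + dk (q_mul and norm2 are illustrative names, not from the notes); the last line also checks multiplicativity of the squared norm for one pair, in the spirit of the normed-algebra cards.

```python
# Sketch: quaternion multiplication from the relations
# i^2 = j^2 = k^2 = -1, ij = k = -ji, jk = i = -kj, ki = j = -ik.

def q_mul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(q_mul(i, j))   # (0, 0, 0, 1) = k
print(q_mul(j, i))   # (0, 0, 0, -1) = -k: H is not commutative

norm2 = lambda q: sum(x * x for x in q)   # squared Euclidean norm on R^4
p, q = (1, 2, 3, 4), (5, 6, 7, 8)
print(norm2(q_mul(p, q)) == norm2(p) * norm2(q))   # True
```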

Corollary 5.9

Corollary 5.9. The minimal polynomial mα divides the characteristic polynomial ∆α. In fact the roots of mα are precisely the eigenvalues of α. Proof. The Cayley-Hamilton theorem gives that the characteristic polynomial ∆α lies in the kernel of the ring homomorphism Φα from (5.1). Since Ker(Φα) = k[t]mα, we have that mα divides ∆α. Therefore every root of mα is a root of ∆α, and hence an eigenvalue of α. Conversely, every eigenvalue of α is a root of mα by Lemma 5.4.

If v ∈ Eα(λ), that is, if α(v) = λv, then C[α]v =?

Cv

Corollary 3.22 (Fundamental Theorem of Arithmetic).

Every natural number greater than 1 is of the form Πpi^ni for distinct prime numbers pi and positive integers ni. The primes pi and their exponents ni are uniquely determined (up to order)
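The existence half can be sketched by trial division (prime_factorisation is an ad hoc helper; uniqueness is the content of the corollary and is not checked here):

```python
# Sketch: factor n > 1 into primes; the returned exponents are the n_i.

def prime_factorisation(n):
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                      # whatever remains is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorisation(360))    # {2: 3, 3: 2, 5: 1}: 360 = 2^3 * 3^2 * 5
```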

Proposition 3.7. Let R be an integral domain. Then every prime element is...

Every prime element is irreducible. Proof. Let p ∈ R be prime, and suppose p = ab. Then either p|a or p|b. Assume without loss of generality (we may swap the letters a and b if we want) that p|a, i.e., there exists c ∈ R such that a = pc. Then p · 1 = p = ab = pcb, and the cancellation property gives cb = 1, so b must be a unit. This shows that p is irreducible. (Part of sketch proof for 3.20)

What does the first isomorphism theorem mean?

Every ring homomorphism can be written as the composition of a surjective ring homomorphism, then an isomorphism, and finally an injective ring homomorphism as shown below (Square diagram on page 17)

(The ring Zn of integers mod n)

Example 1.23, have a look

Consider the linear operator α: C² → C², v |→ Av, where A = (3/2, 1/2; −1/2, 1/2) satisfies ∆α(t) = (1 − t)² and mα(t) = (t − 1)². Following Proposition 5.19, express A as a Jordan matrix (with change of basis).

Example 5.21 page 45

What are the possible decompositions of α ∈ End(V) in a complex vector space V of dimension 4, supposing that α has mα(t) = (t − 5)² and ∆α(t) = (t − 5)⁴?

Example 5.23. Since the degree of mα(t) is 2, the largest Jordan block must be J(5, 2), so the possible decompositions of the 4-dimensional space V are J(5, 2) ⊕ J(5, 2) and J(5, 2) ⊕ J(5, 1) ⊕ J(5, 1). If we know in addition that gm(5) = 3, then we must have three blocks, so the second possibility applies.
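The blocks J(5, 2) and J(5, 1) can be assembled concretely; a sketch (jordan_block and direct_sum are illustrative helper names):

```python
# Sketch: the Jordan block J(lam, e) has lam on the diagonal and 1s on
# the superdiagonal; JNF is a block-diagonal direct sum of such blocks.

def jordan_block(lam, e):
    return [[lam if i == j else 1 if j == i + 1 else 0 for j in range(e)]
            for i in range(e)]

def direct_sum(A, B):
    n, m = len(A), len(B)
    top = [row + [0] * m for row in A]
    bottom = [[0] * n + row for row in B]
    return top + bottom

print(jordan_block(5, 2))                            # [[5, 1], [0, 5]]
J = direct_sum(jordan_block(5, 2), jordan_block(5, 1))
print(J)   # [[5, 1, 0], [0, 5, 0], [0, 0, 5]]
```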

Use Theorem 5.25 (Primary Decomposition) to decompose the rotation by θ radians about the z-axis matrix from Examples 5.12(3),

Example 5.26. Page 50

how to compute a basis for C4 that puts the matrix into normal form

Example 5.35

Definition 5.13 (Direct sum of linear maps and matrices).

For 1 ≤ i ≤ k, let Vi be a vector space and let αi ∈ End(Vi). The direct sum of α1, . . . , αk is the linear map (α1 ⊕ · · · ⊕ αk): ⊕(1≤i≤k) Vi → ⊕(1≤i≤k) Vi defined as follows: each v ∈ ⊕(1≤i≤k) Vi can be written uniquely in the form v = v1 + · · · + vk for some vi ∈ Vi, and we define (α1 ⊕ · · · ⊕ αk)(v1 + · · · + vk) := α1(v1) + · · · + αk(vk).

Definition 5.11 (Invariant subspace).

For a linear operator α : V → V, we say that a subspace W of V is α-invariant if α(W) ⊆ W. If W is α-invariant, then the restriction of α to W, denoted α|W ∈ End(W), is the linear operator α|W : W → W : w |→ α(w)

Mn(R)

For any ring R, let Mn(R) denote the set of all n×n matrices with coefficients in the ring R. Then Mn(R) is a ring with respect to usual addition and multiplication of square matrices. If R is a ring with 1 then so is Mn(R), but this ring is not commutative in general even if R is commutative

Definition 4.5 (General polynomial ring).

For n ≥ 1, the polynomial ring in n variables with coefficients in R is the set R[x1, . . . , xn] of all polynomials in x1, . . . , xn with coefficients in R, where for f =∑(I∈Nⁿ) aIx^I and g =∑(I∈Nⁿ) bIx^I we define f + g = ∑(I∈Nⁿ) (aI + bI )x^I and f · g = ∑(I∈Nⁿ) (∑(J+K=I) aJ · bK) x^I

Corollary 5.33 (Jordan normal form)

For α ∈ End(V), write the characteristic polynomial as ∆α(t) = (λ1−t)^r1 ··· (λk−t)^rk. Then there exists a basis of V such that the matrix A for α expressed in this basis is JNF(α) := JNF(α1) ⊕ ··· ⊕ JNF(αk), where for 1 ≤ i ≤ k, the map αi is the restriction of α to Gα(λi). Proof. The Primary Decomposition Theorem gives V = Gα(λ1) ⊕ Gα(λ2) ⊕ ··· ⊕ Gα(λk) with the corresponding decomposition α = α1 ⊕ ··· ⊕ αk. Remark 5.34. The matrix A in Corollary 5.33 is called a Jordan Normal Form for α. One can show that the Jordan blocks in JNF(α) are unique up to the order in which we write the blocks.

Show that an inverse element in a group is unique

Given a ∈ G, if b,c ∈ G are both elements satisfying the inverse property then b = b∗e = b∗(a∗c) = (b∗a)∗c = e∗c = c.

Let R be a ring with 1. Then show that the multiplicative inverse is unique

Given a ∈ R, if b,c ∈ R are both elements satisfying multiplicative inverse property, then b = b∗1 = b∗(a∗c) = (b∗a)∗c = 1∗c = c.

Lemma 5.30. Let s be the multiplicity of the eigenvalue λ as a root of mα. Then

Gα(λ) = Ker((α−λid)^t) for all t ≥ s. Proof non examinable

Lemma 1.30. Let ∼ be a congruence relation on a ring R, and let I := [0] denote the congruence class of 0. Then...

I is an ideal in the ring R. Moreover: (1) for a,b ∈ R, we have a ∼ b ⇐⇒ a−b ∈ [0]; and (2) the congruence classes of ∼ are the cosets of I, i.e., [a] = a + [0] for all a ∈ R. Proof on exercise sheet 2

Lemma 3.24 (Highest common factors in a UFD)

If R is a UFD and a1, . . . , am ∈ R are not all zero, then they have an hcf c. (An hcf exists) Proof: not examinable

Prove that R[x] is a subring of R[[x]]

If f = ∑(k=0,∞) akx^k and g = ∑(k=0,∞) bkx^k are polynomials of degree m and n respectively, then f − g = ∑(k=0,∞) akx^k − ∑(k=0,∞) bkx^k = ∑(k=0,∞) (ak − bk)x^k is a polynomial of degree at most max(m,n), and (∑(k=0,∞) akx^k)·(∑(k=0,∞) bkx^k) = ∑(k=0,∞) (∑(i+j=k) aibj)x^k is a polynomial of degree at most m + n.

Let φ: R → S be a ring homomorphism. the image of φ is the subset of S given by

Im(φ) = {φ(a) ∈ S | a ∈ R}.

Lemma 2.9 (Properties of the image). Let φ: R → S be a ring homomorphism. Then

Im(φ) is a subring of S. Moreover, φ is surjective iff Im(φ) = S. Proof. Again φ(0R) = 0S, so Im(φ) is nonempty. Let a,b ∈ Im(φ), so there exists c,d ∈ R such that a = φ(c) and b = φ(d). Then a−b = φ(c)−φ(d) = φ(c−d) by Lemma 2.4(1), and ab = φ(c)φ(d) = φ(cd). This gives a−b,ab ∈ Im(φ), so Im(φ) is a subring of S. That φ is surjective if and only if Im(φ) = S holds by definition

Lemma 4.12. Let k ⊆ K be a field extension. Then K is

K is a k-algebra. Proof: page 34

Let φ: R → S be a ring homomorphism. The kernel of φ is the subset of R given by

Ker(φ) = {a ∈ R | φ(a) = 0}

Lemma 2.8 (Properties of the kernel). Let φ: R → S be a ring homomorphism. Then...

Ker(φ) is an ideal of R. Moreover, φ is injective iff Ker(φ) = {0}. Proof page 15

Why is Gα(λ) = ker(α−λid)r where r is the algebraic multiplicity of λ?

Lemma 5.30 says: let s be the multiplicity of the eigenvalue λ as a root of mα; then Gα(λ) = Ker((α−λid)^t) for all t ≥ s. We have r ≥ s because the minimal polynomial divides the characteristic polynomial.

Two fundamental ring homomorphisms

Let I be an ideal in a ring R, and consider the map π: R → R/I defined by setting π(a) = a + I. This is a ring homomorphism, because π(a + b) = (a + b) + I = (a + I) + (b + I) = π(a) + π(b), and π(ab) = ab + I = (a + I)(b + I) = π(a)·π(b). It's clearly surjective, and π(a) = 0 ⇐⇒ a ∈ I. Therefore Im(π) = R/I and Ker(π) = I. Now let S be a subring of a ring R. Consider the map ι: S → R defined by sending each element s ∈ S to the same element considered as an element in R, i.e., ι(s) = s ∈ R. This is a ring homomorphism because ι(a + b) = a + b = ι(a) + ι(b) and ι(a·b) = a·b = ι(a)·ι(b). It's clearly injective and it has image S ⊆ R, so Ker(ι) = {0} and Im(ι) = S.

Definition 1.32 (Quotient rings from ideals).

Let I be an ideal in a ring R. The quotient ring R/I is the set R/I = {a + I : a ∈ R} of cosets of I in R, where we define addition and multiplication in the ring R/I by (a + I) + (b + I) = (a + b) + I (a + I)·(b + I) = (a·b) + I.

Definition 2.21 (Sum & intersection of ideals)

Let I, J be ideals in a ring R. Then the sum of I & J is: I+J := {a+b ∈ R | a∈I & b∈J} & the intersection of I & J I∩J = {a∈R|a∈I & a∈J}

Corollary 3.29 (Gauss' Lemma)

Let R be a UFD with field of fractions F, and let h ∈ R[x]. Then h is irreducible in R[x] if and only if either it is an irreducible element of R, or it is primitive in R[x] and irreducible in F[x]. Proof: not examinable

Lemma 3.27 (Pulling out the content).

Let R be a UFD. A nonzero polynomial f ∈ R[x] has content c if and only if f = c · g with g ∈ R[x] primitive. In particular, c and g are uniquely determined by f up to multiplication by a unit. Proof: see the notes.

Example 1.29 (Principal ideal).

Let R be a commutative ring and let a ∈ R. The set Ra := {r·a ∈ R | r ∈ R} (sometimes denoted <a> if the ring R is clear from the context) is an ideal in R; this is called the ideal generated by a, and every ideal of this form is called a principal ideal.

Theorem 2.25 Chinese remainder theorem

Let R be a commutative ring with 1 and let I, J be ideals in R with I + J = R. Then there is a ring isomorphism R/(I∩J) ≅ R/I × R/J. Proof: Lecture 9
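A concrete sketch in R = Z with I = 3Z and J = 5Z (so I + J = Z and I ∩ J = 15Z): the map a + 15Z |→ (a mod 3, a mod 5) hits all 15 pairs, matching Z/15 ≅ Z/3 × Z/5.

```python
# Sketch: the CRT isomorphism Z/15 -> Z/3 x Z/5, checked exhaustively.
pairs = [(a % 3, a % 5) for a in range(15)]
print(len(set(pairs)) == 15)   # True: the map is injective, hence bijective
print(pairs[13])               # (1, 3): 13 = 1 (mod 3) and 13 = 3 (mod 5)
```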

Definition 1.24 (Congruence relation).

Let R be a ring and let ∼ be an equivalence relation on R. We say that ∼ is a congruence iff for all a, b, a′, b′ ∈ R, we have (1.2) a ∼ a′ and b ∼ b′ =⇒ a + b ∼ a′ + b′ and a·b ∼ a′·b′. The equivalence classes of a congruence ∼ are called congruence classes. This says simply that one can add or multiply any two equivalence classes [a],[b] ∈ R/∼ by first adding or multiplying any representatives of the equivalence classes in the ring R, and then taking the congruence class of the result.

Definition 2.16 (Characteristic of a ring with 1)

Let R be a ring with 1. The characteristic of R, denoted char(R), is a non-negative integer defined as follows; if there is a positive integer m such that m1R = 0R, then char(R) is the smallest such positive integer; otherwise, there is no such positive integer and we say that char(R) = 0.

highest common factor (hcf)

Let R be an integral domain. A common factor c of a1, . . . , am ∈ R is called a highest common factor (hcf) if for any other common factor b of a1, . . . , am we have b|c.

Euclidean domain

Let R be an integral domain. We say that R is a Euclidean domain if it has a Euclidean valuation.

Definition 2.12 (Ring isomorphism)

Let R, S be rings. A homomorphism φ: R → S is called an isomorphism if there is a ring homomorphism ψ: S → R such that ψ(φ(r)) = r for all r ∈ R and φ(ψ(s)) = s for all s ∈ S. Given an isomorphism φ: R → S, we say that R is isomorphic to S and write R ≅ S.

Definition 4.18 (Normed R-algebra).

Let V be an R-algebra with 1 such that V ≠ {0}. We say that V is a normed R-algebra if it is equipped with an inner product such that the corresponding norm satisfies ||u · v|| = ||u|| · ||v|| for all u, v ∈ V .

Definition 3.2 (Divisibility).

Let a, b ∈ R. We say that a divides b (equivalently, that b is divisible by a) if there exists c ∈ R such that b = ac. We write simply a|b

Corollary 4.16 (Construction of splitting fields).

Let k be a field and let f ∈ k[x] be nonconstant. Then there exists a field extension k ⊆ K and an element a ∈ K such that f(a) = 0. Moreover, f can be written as product of polynomials of degree 1 in K[x]. Proof. See Exercise Sheet 7.

What are the primes and irreducibles in a field?

Let k be a field. Every nonzero element in k is a unit, so k contains neither primes nor irreducibles. (since in the definition, it says Let p ∈ R be nonzero and not a unit)

Theorem 4.13 (Constructing intermediate fields)

Let k ⊆ K be a field extension, and let a ∈ K be a root of some nonzero polynomial in k[x]. The set k[a] := {f(a) ∈ K | f ∈ k[x]} is a field, with field extensions k ⊆ k[a] ⊆ K. In fact (1, a, a², . . . , a^(n−1)) is a basis for k[a] over k where n = min{deg(p) | p ∈ k[x] satisfies p(a) = 0}. (sketch)

Theorem 4.22 (Fermat's two square theorem and Lagrange's four square theorem).

Let n ∈ N. Then n is a sum of four integer squares, and n is a sum of two integer squares provided it has no prime factors congruent to 3 modulo 4.

Theorem 4.15 (Constructing field extensions containing roots)

Let p ∈ k[x] be irreducible in k[x]. The field extension k ⊆ K := k[x]/k[x]p has dimension n := deg(p) as a k-vector space, and the element a := [x] ∈ K in this new field is a root of p. Proof: page 36 (sketch)

Theorem 5.22 (Jordan normal form - special case).

Let α ∈ End(V ) be such that ∆α(t) = (λ − t)^r and mα(t) = (t − λ)^s. Then there exists a basis for V such that the matrix for α with respect to this basis is A := JNF(α) = J(λ, e1) ⊕ · · · ⊕ J(λ, em) where (1) m = gm (λ) is the number of Jordan blocks; (2) s = max{e1, . . . , em}; and (3) r = e1 + · · · + em Proof: not examinable

Definition 5.28 (Generalised eigenspace).

Let α: V → V be a linear map with eigenvalue λ. A nonzero vector v ∈ V is a generalised eigenvector with respect to λ if (α−λid)^s v = 0 for some positive integer s. The generalised λ-eigenspace of V is Gα(λ) = {v ∈ V : (α−λid)^s v = 0 for some positive integer s}. Remark 5.29. We have Eα(λ) ⊆ Gα(λ).

Theorem 2.11 (Universal property of the quotient map).

Let φ: R → S be a ring homomorphism and let I be an ideal in R satisfying I ⊆ Ker(φ). Then there exists a unique ring homomorphism φ̄: R/I → S such that the diagram (page 16) commutes, i.e., φ̄◦π = φ (here π: R → R/I is the quotient map from Example 2.10). Proof: page 16 (part of the first isomorphism sketch proof)

Every division ring is a field?

No, division rings need not be commutative so division rings need not be fields

Learn how to do sketch proofs

POTATO

Theorem 3.11 (Euclidean domains are PIDs).

Every Euclidean domain is a PID. Proof on page 25 (sketch)

The ring Z is a subring of

Q which is a subring of R which is a subring of C under the usual operations of addition and multiplication.

Division ring

R is a division ring if it is a ring with 1 in which 0 ≠ 1, such that every non-zero element is a unit, i.e., for all a ∈ R\{0}, there exists b ∈ R such that ab = 1 = ba.

commutative ring

R is a commutative ring if a·b = b·a for all a,b ∈ R.

Field

R is a field if it is a commutative division ring

integral domain

R is an integral domain if it is a commutative ring with 1 in which 0 ≠ 1, such that if a,b ∈ R satisfy ab = 0, then a = 0 or b = 0.

The ring of polynomials with coefficients in R.

R[x] := {∑(k=0,∞) akx^k ∈ R[[x]] | ak ≠ 0 for only finitely many k ≥ 0} (the subset of polynomials). In particular, by ignoring the terms with coefficient equal to zero, any polynomial can be written as a0 + a1x + ··· + anxⁿ for some n ≥ 0. The degree of a nonzero polynomial is the largest n such that an ≠ 0.

Lemma 3.4 (Units don't change the ideal). Let R be an integral domain and let a, b ∈ R. Then...

Ra = Rb ⇐⇒ a = ub for some unit u ∈ R. In particular, R = Ru if and only if u is a unit in R. Proof. If Ra = Rb, then we have both Ra ⊆ Rb and Rb ⊆ Ra, hence b|a and a|b. Thus there exist u, v ∈ R such that a = ub and b = va. Putting these equations together shows that 1a = a = ub = uva. If a = 0, then b = 0 and there's nothing to prove. Otherwise, the cancellation law in the integral domain R gives uv = 1, so u is a unit in R. Conversely, suppose a = ub for some unit u ∈ R. Then a ∈ Rb, so Ra ⊆ Rb. Since u is a unit, we may multiply a = ub by u⁻¹ to obtain b = u⁻¹a. This gives b ∈ Ra and hence Rb ⊆ Ra. These two inclusions together give Ra = Rb as required. The final statement of the lemma follows from the special case a = 1.

Definition 2.24 (Direct product of rings)

RxS := {(r,s) | r∈R, s∈S} with (a,b)+(c,d) := (a+c,b+d) & (a,b)⋅(c,d) :=(ac,bd)

Show that in a normed algebra ||1V || = 1

The assumption V ≠ {0} gives 1V ≠ 0 and hence ||1V|| ≠ 0. We have ||1V|| = ||1V · 1V|| = ||1V|| · ||1V||. Since the norm takes values in the integral domain R, the resulting equality ||1V|| · (1 − ||1V||) = 0 implies that ||1V|| = 1.

Definition 5.5 (Characteristic polynomial and multiplicities of eigenvalues).

The characteristic polynomial of α: V → V is ∆α(t) = det (α−tid) = det (A−tIn), where A is a matrix representing α with respect to some basis. The algebraic multiplicity, am(λ), of an eigenvalue λ is the multiplicity of λ as a root of ∆α(t). The geometric multiplicity gm(λ) is the dimension of the eigenspace Eα(λ) = Ker(α − λid) = Ker(A − λIn). Remarks 5.6. (1) This characteristic polynomial of a linear operator α does not depend on the choice of matrix A representing α, so it's well-defined. (2) We have am(λ) ≥ gm(λ).

Why is Z₄ not an integral domain?

The commutative ring Z₄ = {[0],[1],[2],[3]} satisfies [2]·[2] = [4] = [0] and yet [2] ≠ [0], so Z4 is not an integral domain.
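A quick computational check (not from the notes): Zₙ has zero divisors exactly when n is composite, which is why Z₄ fails while Z₅ is an integral domain.

```python
# Zero divisors in Z_n: elements [a] ≠ [0] with [a][b] = [0] for some [b] ≠ [0].
def zero_divisors(n):
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

assert zero_divisors(4) == [2]        # [2]·[2] = [0] in Z_4
assert zero_divisors(5) == []         # Z_5 is an integral domain
assert zero_divisors(6) == [2, 3, 4]  # 6 composite, so Z_6 also fails
```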

Give an example of a subring of a 'ring with 1' which isn't a 'ring with 1'.

The even integers 2Z (a subring of Z with no multiplicative identity).

Example of a ring homomorphism from Z to Z₂

The function φ: Z → Z₂ defined by φ(n) = 0 if n is even, and φ(n) = 1 if n is odd.
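A spot-check in Python (an illustration, not from the notes) that reduction mod 2 respects both ring operations, i.e. the conditions of Definition 2.1:

```python
# phi(n) = n mod 2 is a ring homomorphism Z -> Z_2: check both axioms
# on a small range of inputs.
phi = lambda n: n % 2

for a in range(-5, 6):
    for b in range(-5, 6):
        assert phi(a + b) == (phi(a) + phi(b)) % 2   # respects addition
        assert phi(a * b) == (phi(a) * phi(b)) % 2   # respects multiplication
```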

Definition 5.2 (Minimal polynomial)

The minimal polynomial of α: V → V is the monic polynomial mα ∈ k[t] of lowest degree such that mα(α) = 0. We also write mA and refer to the minimal polynomial of an n × n matrix A representing α.

Give two trivial invariant subspaces of any linear operator α: V → V

The subspaces {0} and V

Lemma 2.27. Define a relation ∼ on T by setting (a,b) ∼ (c,d) ⇐⇒ ad = bc.

Then for all a, a′, b, b′, c, c′, d, d′ ∈ R with b, b′, d, d′ ≠ 0, we have that (a,b) ∼ (a′,b′) and (c,d) ∼ (c′,d′) =⇒ (a,b) + (c,d) ∼ (a′,b′) + (c′,d′) and (a,b)·(c,d) ∼ (a′,b′)·(c′,d′). In other words, ∼ satisfies the conditions of being a congruence relation on T (although we can't say that it is a congruence, since T is not a ring; additive inverses don't exist in general). Proof not examinable

Theorem 2.29. Let R be an integral domain. The set F(R) with the binary operations from (2.4) above is.... Moreover, the map R → F(R) defined by sending a to a/1 is ....

Theorem 2.29. Let R be an integral domain. The set F(R) with the binary operations from (2.4) above is a field; this is called the field of fractions of R. Moreover, the map R → F(R) defined by sending a to a/1 is an injective homomorphism. Proof: not examinable. Note that T = {(a,b) ∈ R × R | b ≠ 0} isn't a ring because additive inverses don't exist in general, so we can't use Thm 1.26 to show that F(R) is a ring.
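For R = Z the construction recovers F(Z) = Q; Python's `fractions.Fraction` (an illustration, not from the notes) models the equivalence classes of pairs (a,b) with b ≠ 0 under (a,b) ∼ (c,d) ⇐⇒ ad = bc:

```python
from fractions import Fraction

# (1,2) ~ (2,4) since 1·4 = 2·2: Fraction identifies equivalent pairs.
assert Fraction(1, 2) == Fraction(2, 4)
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)
# Every nonzero element is invertible, so F(Z) is a field:
assert Fraction(2, 3) * Fraction(3, 2) == 1
# The injective homomorphism R -> F(R), a |-> a/1:
embed = lambda a: Fraction(a, 1)
assert embed(3) * embed(4) == embed(12)
```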

Theorem 4.21 (Classification of normed R-algebras)

There are exactly three normed R-algebras up to isomorphism, namely, R, C and H. Idea of proof. Let V be a normed R-algebra. • If 1, t are orthonormal in V , then t² = −1. • If 1, i, j are orthonormal in V , then so are 1, i, j, ij. Moreover ji = −ij. • If 1, i, j, ij, e are orthonormal in V , then (ij)e = −e(ij) = iej = −ije so (ij)e = 0 which is absurd. Thus dim V ∈ {1, 2, 4} and we get V ≅ R, C or H accordingly.

What two subspaces are α-invariant for the linear operator α: R³ → R³ that rotates every vector by θ radians anticlockwise around the z-axis

V1 := Re1 ⊕ Re2 and V2 := Re3 are α-invariant subspaces. The restriction α|V1 : V1 → V1 is simply rotation by θ radians in the plane, while α|V2 : V2 → V2 is the identity on the real line. Look on page 41 at the structure of the matrix.

5.5. The Jordan Decomposition Theorem.

We now tackle the general case, where α ∈ End(V ) need not have a single eigenvalue. Let's work over a field k that contains all of the eigenvalues of α, in which case we can decompose the minimal polynomial as mα(t) = (t−λ1)^s1 ·(t−λ2)^s2 ···(t−λk)^sk where λ1,...,λk are the distinct eigenvalues of α (recall that the roots of mα are exactly the eigenvalues of α); for example, we could use C, but using the results of Section 4 one can often get away with a much smaller field. In any event, the Primary Decomposition Theorem 5.25 implies that V = Ker(α−λ1id)^s1 ⊕Ker(α−λ2id)^s2 ⊕···⊕Ker(α−λkid)^sk is a decomposition of V as a direct sum of α-invariant subspaces.

Remark 5.10 (How to compute the minimal polynomial)

When working over C, Corollary 5.9 says that if λ1, . . . , λk are the distinct eigenvalues of α and ∆α(t) = (λ1 − t)^r1 · · · (λk − t)^rk, then mα(t) = (t − λ1)^s1 · · · (t − λk)^sk with 1 ≤ si ≤ ri for all 1 ≤ i ≤ k. So the characteristic polynomial bounds the exponents si; determine each one by trial and error from there.
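The trial-and-error step can be sketched in Python for a made-up example matrix A (not from the notes) with ∆_A(t) = (2 − t)³, so m_A(t) = (t − 2)^s for some 1 ≤ s ≤ 3:

```python
# Trial and error for the minimal polynomial, as in Remark 5.10:
# test s = 1, 2, ... by checking whether (A - 2I)^s is the zero matrix.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [0, 2, 0], [0, 0, 2]]   # hypothetical example matrix
N = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # N = A - 2I
zero = [[0] * 3 for _ in range(3)]

assert N != zero               # s = 1 fails: A - 2I is nonzero
assert matmul(N, N) == zero    # s = 2 works, so m_A(t) = (t - 2)^2
```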

Example 1.34 (Example of an Ideal in Z and a quotient ring from the ideal)

The ideal Zn ⊆ Z and the quotient ring Zₙ = Z/Zn. The quotient is a commutative ring with 1 because Z is.

Corollary 4.9. Let k be a field. Then k[x1, . . . , xn] is

a UFD Proof. We know k is a PID, hence a UFD. Assume by induction that S := k[x1, . . . , xn−1] is a UFD, then S[xn] is a UFD by Theorem 3.30 and we're done by Proposition 4.8. Remark 4.10. Note that k[x1, . . . , xn] is not a PID for n ≥ 2, see Exercise Sheet 7

The ring Z is an integral domain, but it's not ...

a division ring, so it's not a field

Theorem 3.17. Let R be a PID. If p is irreducible then R/Rp is ...

a field. proof on page 27 (sketch)

A Euclidean valuation on R is...

a map ν : R ∖ {0} → {0, 1, 2, . . .} such that: (1) for f, g ∈ R ∖ {0} we have ν(f) ≤ ν(fg); and (2) for all f, g ∈ R with g ≠ 0, there exist q, r ∈ R such that f = qg + r and either r = 0 or ν(r) < ν(g).
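The standard example is R = Z with ν(f) = |f|; this sketch (not from the notes, using a hypothetical `divide` helper built on Python's `divmod`) checks both axioms on a range of inputs:

```python
# Z is a Euclidean domain with valuation v(f) = |f|.
def divide(f, g):
    """Return (q, r) with f = q*g + r and either r = 0 or |r| < |g|."""
    q, r = divmod(f, g)   # Python guarantees |r| < |g| (r has the sign of g)
    return q, r

for f in range(-10, 11):
    for g in [1, 2, 3, 7, -4]:
        q, r = divide(f, g)
        assert f == q * g + r and (r == 0 or abs(r) < abs(g))  # axiom (2)
        if f != 0:
            assert abs(f) <= abs(f * g)                        # axiom (1)
```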

inner product on a real vector space V and corresponding norm

a positive definite symmetric bilinear form <· , · >: V × V → R. The corresponding norm is || · ||: V → R given by ||v|| =√<v, v>. Positive definiteness gives that ||v|| = 0 =⇒ v = 0.

Relation

a relation ∼ on R is a subset S ⊂ R×R, in which case we write a ∼ b ⇐⇒ (a,b) ∈ S.

Theorem 3.30. If R is a UFD, then the polynomial ring R[x] is

also a UFD Proof: not examinable

Remark 1.12. Every field k is ...

an integral domain. Indeed, if a,b ∈ k satisfy ab = 0 and if a ≠ 0, then b = 1·b = a⁻¹ab = a⁻¹ ·0 = 0. Also every field is a commutative ring and hence so are Q,R,C with respect to the usual addition and multiplication.

Theorem 5.7 (Cayley-Hamilton)

For any A ∈ Mn(k) we have ∆A(A) = 0 ∈ Mn(k). Equivalently, for any linear α: V → V we have ∆α(α) = 0 ∈ End(V). One can't argue that det(A − AIn) = det(0) = 0 and thus ∆A(A) = 0; you have to construct the polynomial ∆A(t) first and then substitute A. But the statement is true. Proof: page 40 (sketch)
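For a 2×2 matrix, ∆_A(t) = t² − tr(A)t + det(A), and Cayley-Hamilton can be checked directly; this Python sketch (not from the notes) forms the polynomial first and only then substitutes A:

```python
# Check Cayley-Hamilton for 2x2 matrices: A^2 - tr(A)·A + det(A)·I = 0.
def cayley_hamilton_residue(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    A2 = [[a * a + b * c, a * b + b * d],      # A squared, computed by hand
          [c * a + d * c, c * b + d * d]]
    return [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
             for j in range(2)] for i in range(2)]

assert cayley_hamilton_residue([[1, 2], [3, 4]]) == [[0, 0], [0, 0]]
assert cayley_hamilton_residue([[5, -1], [7, 2]]) == [[0, 0], [0, 0]]
```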

Show that Mn(k) is isomorphic to End(V)

assignment 3 question 4 (2)

Any statement about divisibility can be rephrased in terms of ideals as follows: Lemma 3.3. For a, b ∈ R we have a|b ⇐⇒ ?

b ∈ Ra ⇐⇒ Rb ⊆ Ra Proof. If a|b then there exists c ∈ R such that b = ca ∈ Ra. Since Ra is an ideal, it follows that rb ∈ Ra for all r ∈ R, giving Rb ⊆ Ra. Conversely, if Rb ⊆ Ra, then in particular, b ∈ Rb lies in Ra, and hence there exists c ∈ R such that b = ca, so a|b.

If both b and c are hcfs of a1, . . . , am, then

b|c and c|b, so b = uc for a unit u by Lemma 3.4.

If all common factors of a1, . . . , am are units, we say a1, . . . , am are

coprime

Remark 5.14. For 1 ≤ i ≤ k, let Ai ∈ Mni(k) be the matrix for a linear map αi with respect to some basis Bi of Vi. Then the matrix for the direct sum α1 ⊕ · · · ⊕ αk with respect to the basis B1 ∪ B2 ∪ · · · ∪ Bk of ⊕(1≤i≤k) Vi is...

the block diagonal matrix A1 ⊕ · · · ⊕ Ak

Definition 3.14. Let R be a PID. Two elements a, b ∈ R are said to be coprime if...

every common factor is a unit; by this, we mean that if d|a and d|b, then d is a unit

Lemma 5.4. Let p be a polynomial such that p(α) = 0. Then...

every eigenvalue of α is a root of p. In particular every eigenvalue of α is a root of mα. Proof. Page 39

Let R be a ring and let x be a variable. A formal power series f over R is a formal expression

f = ∑(k=0,∞) akx^k = a₀ + a₁x + a₂x² + a₃x³ + ... with ak ∈ R for k ≥ 0 (No notion of convergence)

Lemma 3.1 (Cancellation property). Let R be a commutative ring with 1 such that 0 ≠ 1. Then R is an integral domain if and only if...

for all a, b, c ∈ R, we have ab = ac and a ≠ 0 =⇒ b = c. Proof. First, let R be an integral domain, and suppose ab = ac and a ≠ 0. Then 0 = ab + (−ac) = ab + a(−c) = a(b + (−c)). Since R is an integral domain and a ≠ 0, we have b + (−c) = 0, that is, b = c. For the opposite implication, let R be a commutative ring with 1 such that 0 ≠ 1, and assume the cancellation property. Suppose a, b ∈ R satisfy ab = 0 and a ≠ 0. Then ab = 0 = a · 0, and since a ≠ 0 the cancellation property gives b = 0 as required.
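An exhaustive check in the finite rings Zₙ (an illustration, not from the notes): cancellation holds precisely when n is prime, i.e. when Zₙ is an integral domain.

```python
# Does the cancellation property of Lemma 3.1 hold in Z_n?
def cancels(n):
    return all(b == c
               for a in range(1, n)          # a != 0
               for b in range(n) for c in range(n)
               if (a * b) % n == (a * c) % n)

assert cancels(5) is True    # Z_5 is an integral domain
assert cancels(7) is True
assert cancels(4) is False   # 2·1 = 2·3 in Z_4, but 1 != 3
```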

Show that C[α]v is an α-invariant subspace of V

for p, q ∈ C[t] and λ ∈ k, we have λ(p(α)v) + q(α)v = (λp + q)(α)v, so C[α]v is a subspace of V. It is also α-invariant, since αp(α)v = u(α)v where u is the polynomial u(t) = tp(t).

Lemma 2.22 I∩J & I+J are

ideals in R Proof: exercise

Let R be a ring with 1. Then show that the multiplicative identity is unique

if 1 and 1̄ are both multiplicative identity elements, then 1̄ = 1̄·1 = 1

Definition 2.1 (Ring homomorphism) A map φ: R → S is said to be a ring homomorphism

if and only if for all a,b ∈ R, we have φ(a + b) = φ(a) + φ(b) and φ(a·b) = φ(a)·φ(b).

Show that the identity element in a group is unique

if e, f ∈ G are two elements satisfying the identity property, then f = e∗f = e.

Theorem 3.21. Let R be a PID. Then R

is a UFD proof page 28 but uniqueness part is not examinable (sketch other part)

The dimension of a k-algebra V

is the dimension of V as a vector space over k

Let λ be an eigenvalue of α. If v is an eigenvector for λ, then the one dimensional subspace kv is

is α-invariant because α(av) = aα(v) = aλv ∈ kv

A nonempty subset W of a k-algebra V is a subalgebra if

it is both a subring and a vector subspace.

A function is bijective iff

it is invertible. For f : A → B, a function g : B → A is the inverse of f if f ◦ g = 1B and g ◦ f = 1A.

Proposition 3.20. Let R be a UFD. Then p ∈ R is irreducible if and only if

it is prime proof page 27 (sketch)

If the coefficient ring R is a field k, then the general polynomial ring k[x1, . . . , xn] is a ...

k-algebra with basis as a vector space equal to the set of all monomials {x₁^i1 x₂^i2 · · · xn^in | i1, . . . , in ∈ N}; this vector space is not finite dimensional! As in Remark 4.2, multiplication of polynomials is determined by the bilinearity and multiplication of monomials: (x₁^i1 x₂^i2· · · xn^in) · (x₁^j1 x₂^j2· · · xn^jn) = x₁^(i1+j1) x₂^(i2+j2)· · · xn^(in+jn).

General polynomial rings.

let n ≥ 1, let x1, . . . , xn be variables and let R be a ring. A polynomial f in x1, . . . , xn with coefficients in R is a formal sum (4.2) f(x1, . . . , xn) = ∑(i1,...,in≥0) ai1,...,in x₁^i1· · · xn^in, with coefficients ai1,...,in ∈ R for all tuples (i1, . . . , in) ∈ Nⁿ , where only finitely many of the ai1,...,in are nonzero. To avoid having to write so many indices, let's write aI := ai1,...,in and x^I := x₁^i1· · · xn^in for any n-tuple I = (i1, . . . , in) ∈ Nⁿ . Then every polynomial in x1, . . . , xn can be written in the form f =∑(I∈Nⁿ)aIx^I where only finitely many of the elements aI ∈ R are nonzero and where x^I := x₁^i1· · · xn^in

For any positive integer n, we have that char(Zn) = ?

n

Lemma 2.18. Let R be a ring of characteristic n > 0. Then...

n · a = 0 for all a ∈ R. Proof. For a ∈ R, we have n · a = a + · · · + a (n times) = 1R·a + · · · + 1R·a (n times) = (1R + · · · + 1R)·a = 0R·a = 0R, as required.

What's the minimal polynomial of α = λid?

p(α) = 0 where p(t) = t − λ, so mα(t) = t − λ.

Proposition 5.24 (Primary decomposition in the case k = 2)

page 46 Let α: V → V be a linear operator and suppose that the minimal polynomial satisfies mα = q1q2, where q1, q2 are monic and coprime. For 1 ≤ i ≤ 2, let Vi = Ker(qi(α)). Then: (1) the subspaces V1, V2 are α-invariant and satisfy V = V1 ⊕ V2; and (2) the maps αi = α|Vi for 1 ≤ i ≤ 2 satisfy α = α1 ⊕ α2 and mαi = qi. (sketch proof)

Theorem 5.32 (Jordan Decomposition).

page 52 Suppose that the characteristic and minimal polynomials are ∆α(t) = Π(1≤i≤k) (λi−t)^ri and mα(t) = Π(1≤i≤k) (t−λi)^si respectively. Then V = Gα(λ1) ⊕ · · · ⊕ Gα(λk), and if α = α1 ⊕ · · · ⊕ αk is the corresponding decomposition of α, then ∆αi(t) = (λi − t)^ri and mαi(t) = (t − λi)^si. Proof non-examinable

Proposition 3.16. Let R be a PID. Then every irreducible element in R is ...

prime proof page 26

Lemma 3.15. Let R be a PID and let a, b ∈ R be coprime. There exists...

r, s ∈ R such that 1 = ra + sb. Proof. Consider the ideal Ra + Rb. Since R is a PID, there exists d ∈ R such that Ra + Rb = Rd. In particular, a, b ∈ Rd, so d divides both a and b. Since a and b are coprime, it follows that d is a unit. Lemma 3.4 gives Rd = R and hence Ra + Rb = R. Since R is a ring with 1, there exist r, s ∈ R such that 1 = ra + sb as required. (part of sketch proof for 3.17)
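In R = Z the lemma is effective: the extended Euclidean algorithm computes the coefficients r, s. A Python sketch (not from the notes):

```python
# Extended Euclidean algorithm: for coprime a, b it returns r, s with ra + sb = 1.
def bezout(a, b):
    """Return (g, r, s) with g = gcd(a, b) = r*a + s*b."""
    if b == 0:
        return a, 1, 0
    g, r, s = bezout(b, a % b)
    return g, s, r - (a // b) * s

g, r, s = bezout(9, 14)         # 9 and 14 are coprime
assert g == 1 and r * 9 + s * 14 == 1
```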

For any ring R, both {0} and R are

subrings of R

equivalence class

the equivalence class of an element a ∈ R is the (nonempty) set [a] := {b ∈ R | b ∼ a} of elements that are equivalent to a. Every element lies in a unique equivalence class, and any two distinct equivalence classes are disjoint subsets of R; we say that the equivalence classes partition the set R.

Let V be an n-dimensional vector space over k. Let α: V → V be a linear operator and let A be the matrix representing α with respect to a given basis (v1, v2, . . . , vn) of V . What is Φα ?

the map k[t] → Mn(k) defined by sending f |→ f(A) is a ring homomorphism. Recall from Exercise 3.4 that the rings End (V) and Mn(k) are isomorphic as rings as well as vector spaces over k of dimension n², and by precomposing with this isomorphism we obtain a ring homomorphism (5.1) Φα : k[t] → End(V ), f |→ f(α), where the multiplication in End(V ) is the composition of maps.

Proposition 4.8. The polynomial ring R[x1, . . . , xn] in n variables is isomorphic to

the polynomial ring S[xn] in the variable xn with coefficients in S = R[x1, . . . , xn−1]. Proof: on ex sheet 7 but not examinable

Lemma 5.1. The kernel of the ring homomorphism Φα is not...

the zero ideal Proof: page 39

Remarks 4.2. (1) For v ∈ V , the 'multiply on the left by v' map Tv : V → V given by Tv(u) = v · u is

a k-linear map; the same is true for 'multiply on the right'.

(2) Suppose that (vi)i∈I is a basis for the k-algebra V . To determine the multiplication on V , it suffices to know only the values of...

vi · vj for all i, j ∈ I, because (∑(i∈I) αivi)·(∑(j∈I) βjvj) = ∑(i,j∈I) (αiβj)(vi · vj).

