Today I completed my planning for my summer course for math teachers, "Computer Science for Math Teachers"
-
So they've added python to it to some degree, do you think? That's interesting.
@futurebird it refers to it as “analysis”; it has a special UI in which you can examine the python program and its output, and optionally download it to run on your own machine, a little like how it presents UI for other external services (image search results, image generation, image description). The LLM itself doesn’t do those things but makes it seem like it does.
-
@futurebird What are your thoughts on teaching them eg. Haskell? To me, Lambda Calculus feels like a natural gateway between math and programming
None of our students use Haskell, and it's just a bit obscure. These are math teachers who rarely program anything. I'm picking python since it's what they'll see most often, and it might work its way into their lessons because of that.
(I will implement Haskell education for the math teachers when I start teaching the fifth graders Dvorak ... )
-
@futurebird it refers to it as “analysis”; it has a special UI in which you can examine the python program and its output, and optionally download it to run on your own machine, a little like how it presents UI for other external services (image search results, image generation, image description). The LLM itself doesn’t do those things but makes it seem like it does.
Well now I've got to try this too. I always try each new thing to understand it.
I mainly use LLMs to clean up my speech-to-text narration or to format lecture notes into lesson plans. They're good when I provide **all** the content, and decent at shortening and clarifying text. That said, once you include the time needed to proofread the output, it's only slightly faster than doing it from scratch.
-
Example: "How many three-digit integers contain at least one 2?"
Elegant permutation solution:
9*10*10 - 8*9*9 = 252
It's also fun to write a program:
three_digit = []
for i in range(100, 1000):
    if "2" in str(i):
        three_digit.append(i)
print(len(three_digit))
It's a less trivial problem if you make it: "How many three-digit EVEN integers contain at least one 2?"
(but it's still trivial in code. Just add "and i%2==0")
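A minimal sketch of that even variant, just adding the suggested parity check to the loop above:

```python
# Brute-force count of three-digit EVEN integers containing at least one 2.
count = 0
for i in range(100, 1000):
    if "2" in str(i) and i % 2 == 0:
        count += 1
print(count)  # 162
```

This also shows why the counting answer changes: of the 450 even three-digit numbers, those with no 2 have 8 choices for the hundreds digit, 9 for the tens, and only 4 (0, 4, 6, 8) for the units, so 450 − 8·9·4 = 162.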
@futurebird Is 8*9*4 just a typo, or are you checking to see how alert we are?
9 × 10 × 10 – 8 × 9 × 9 = 252 (correct)
9 × 10 × 10 – 8 × 9 × 4 = 612
-
@futurebird Is 8*9*4 just a typo, or are you checking to see how alert we are?
9 × 10 × 10 – 8 × 9 × 9 = 252 (correct)
9 × 10 × 10 – 8 × 9 × 4 = 612
Thanks!
-
@futurebird it refers to it as “analysis”; it has a special UI in which you can examine the python program and its output, and optionally download it to run on your own machine, a little like how it presents UI for other external services (image search results, image generation, image description). The LLM itself doesn’t do those things but makes it seem like it does.
@bri_seven @futurebird
Presumably constraining the LLM output to valid python programs at the sampling stage, potentially giving a misleading impression about its capability
-
I think one ought to be able to bounce in and out of both ways of seeing the problem seamlessly.
Use brute force to verify your theory. Use theory to make better brute force.
@futurebird one thing I've always appreciated in the intersection between coding and computer science/mathematics is when you can use a naïve approach initially to get answers fast, but then have to switch to a more sophisticated algorithm to have any hope of scaling. I'll see if I can come up with some specific examples…
-
@futurebird one thing I've always appreciated in the intersection between coding and computer science/mathematics is when you can use a naïve approach initially to get answers fast, but then have to switch to a more sophisticated algorithm to have any hope of scaling. I'll see if I can come up with some specific examples…
Back in the day when computers were rare, if I had to show people who were completely new to coding one simple thing you could program super-easily if you weren’t too worried about efficiency, and then refine to get better scaling, I always took them through a few versions of listing the prime numbers up to N.
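One possible progression for that exercise (a sketch of the idea, not necessarily the exact versions described above): start naïve, then shrink the divisor range, then switch to a sieve.

```python
# Version 1: naive trial division — try every possible divisor below p.
def primes_naive(n):
    return [p for p in range(2, n + 1)
            if all(p % d != 0 for d in range(2, p))]

# Version 2: any factor pair has one member <= sqrt(p), so stop there.
def primes_sqrt(n):
    return [p for p in range(2, n + 1)
            if all(p % d != 0 for d in range(2, int(p ** 0.5) + 1))]

# Version 3: Sieve of Eratosthenes — cross off multiples instead of dividing.
def primes_sieve(n):
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

print(primes_sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

All three give the same list; the payoff of the progression is watching version 1 crawl for large N while version 3 stays fast.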
-
@bri_seven @futurebird
Presumably constraining the LLM output to valid python programs at the sampling stage, potentially giving a misleading impression about its capability
An LLM isn't a compiler, and unless it has additional special-case handling it cannot tell if a python program is valid enough to run.
It can only guess whether the code looks like programs that people in its training text said were valid and would run.
And perhaps you were aware of this, but it's exactly the misconception I keep bumping into with what people ask LLMs to do.
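That extra special-case handling can be fairly cheap, though: a harness outside the LLM can hand the generated text to python's actual compiler. A sketch (this checks syntax only, not that the program runs or does anything useful):

```python
# A harness (not the LLM itself) can check whether generated code is at
# least syntactically valid by handing it to Python's real compiler.
def is_valid_syntax(source: str) -> bool:
    try:
        compile(source, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

print(is_valid_syntax("print(1 + 1)"))  # True
print(is_valid_syntax("print(1 + "))    # False
```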
-
Back in the day when computers were rare, if I had to show people who were completely new to coding one simple thing you could program super-easily if you weren’t too worried about efficiency, and then refine to get better scaling, I always took them through a few versions of listing the prime numbers up to N.
This activity is much more rewarding if you aren't also teaching what a prime number even is at the same time. Though, that's sort of what I've been developing with the fifth graders: learning about concepts like prime numbers through programming. It's very different from what I'll be doing with the math teachers in the summer.
For the math teachers, the prime numbers are a safe anchor they understand and the code is the new thing. For the kids it's all new.
-
Example: "How many three-digit integers contain at least one 2?"
Elegant permutation solution:
9*10*10 - 8*9*9 = 252
It's also fun to write a program:
three_digit = []
for i in range(100, 1000):
    if "2" in str(i):
        three_digit.append(i)
print(len(three_digit))
It's a less trivial problem if you make it: "How many three-digit EVEN integers contain at least one 2?"
(but it's still trivial in code. Just add "and i%2==0")
@futurebird
len([n for n in range(100, 1000) if '2' in str(n)])
this is why python scares me and shouldn't be used for intro programming.
-
This activity is much more rewarding if you aren't also teaching what a prime number even is at the same time. Though, that's sort of what I've been developing with the fifth graders: learning about concepts like prime numbers through programming. It's very different from what I'll be doing with the math teachers in the summer.
For the math teachers, the prime numbers are a safe anchor they understand and the code is the new thing. For the kids it's all new.
"That number, 19, is a prim. You can only factor it as 1 times itself."
"Don't you mean it's a prime?"
"No, it's prim, just not comfortable with any other factors but itself and 1. And there's nothing wrong with that as long as it doesn't look down on other numbers for having so many factorizations."
19: "30 is such a ho. Disgusting."
"wow... so much for that."
-
This activity is much more rewarding if you aren't also teaching what a prime number even is at the same time. Though, that's sort of what I've been developing with the fifth graders: learning about concepts like prime numbers through programming. It's very different from what I'll be doing with the math teachers in the summer.
For the math teachers, the prime numbers are a safe anchor they understand and the code is the new thing. For the kids it's all new.
@futurebird @gregeganSF @eigen teaching kids about primes is cool cuz you can show them simple-to-understand puzzles that mathematicians still haven't solved after hundreds of years.
-
An LLM isn't a compiler, and unless it has additional special-case handling it cannot tell if a python program is valid enough to run.
It can only guess whether the code looks like programs that people in its training text said were valid and would run.
And perhaps you were aware of this, but it's exactly the misconception I keep bumping into with what people ask LLMs to do.
@futurebird @bri_seven
Exactly — they can bolt a thing to the output part of an LLM to force it to only output valid python programs, but it doesn't make the LLM any smarter; it just forces it to output valid python programs
-
@futurebird @bri_seven
Exactly — they can bolt a thing to the output part of an LLM to force it to only output valid python programs, but it doesn't make the LLM any smarter; it just forces it to output valid python programs
@sabik @futurebird @bri_seven this is exactly what they do, and it’s surprisingly effective because of the feedback loop. Unlike the pure LLM output, it’s now closer to classic evolutionary design with a generative component plus a fitness component, and can iterate until it produces a working program. Of course this assumes the test is described correctly, and it only works for programs that can be tested that way, but when it works it’s impressive.
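A toy sketch of that generate-plus-fitness loop, with the generative side stubbed out as a fixed list of candidates (everything here is hypothetical illustration, not any vendor's actual pipeline):

```python
# The "LLM" is stubbed as a list of candidate programs; a real system
# would sample from a model instead. The spec: double(x) returns 2x.
candidates = [
    "def double(x): return x + x + 1",  # wrong answer
    "def double(x) return x * 2",       # syntax error
    "def double(x): return x * 2",      # correct
]

def passes_tests(source: str) -> bool:
    """Fitness check: does the candidate compile, run, and satisfy the spec?"""
    namespace = {}
    try:
        exec(source, namespace)
        return namespace["double"](3) == 6 and namespace["double"](0) == 0
    except Exception:
        return False

# Iterate until a candidate survives the fitness check.
working = next(c for c in candidates if passes_tests(c))
print(working)  # def double(x): return x * 2
```

The loop only converges when the fitness check really captures the spec, which is the caveat above: describe the test wrong and the process happily "converges" on the wrong program.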
-
@sabik @futurebird @bri_seven this is exactly what they do, and it’s surprisingly effective because of the feedback loop. Unlike the pure LLM output, it’s now closer to classic evolutionary design with a generative component plus a fitness component, and can iterate until it produces a working program. Of course this assumes the test is described correctly, and it only works for programs that can be tested that way, but when it works it’s impressive.
That's interesting. I do wonder if a person who can precisely describe what program they want would need this help as much. I mean, I sometimes look up things like sorting algorithms, or ways to do something that I know can be done faster than whatever I coded... and an LLM kind of does that for you and formats it a bit. Or do you think it's doing more than that with this process?