I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.
https://news.ycombinator.com/context?id=43948657
Thesis:
1. LLMs are bad at counting the number of r's in strawberry.
2. LLMs are good at writing code that counts letters in a string.
3. LLMs are bad at solving reasoning problems.
4. Prolog is good at solving reasoning problems.
5. ???
6. LLMs are good at writing prolog that solves reasoning problems.
Common replies:
1. The bitter lesson.
2. There are better solvers, e.g. Z3.
3. Someone smart must have already tried and ruled it out.
Successful experiments:
> "4. Prolog is good at solving reasoning problems."
Plain Prolog's way of solving reasoning problems is effectively:
You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.

    for person in [martha, brian, sarah, tyrone]:
        if timmy.parent == person:
            print "solved!"

Arguably it lets a human express reasoning problems better than other languages, by letting you write high-level code in a declarative way instead of allocating memory, choosing data types, initializing linked lists and so on, so you can focus on the reasoning - but that is no benefit to an LLM, which can output any language as easily as any other. And while that might have been nice compared to Pascal in 1975, it's not so different from modern garbage-collected high-level scripting languages. Arguably Python or JavaScript will benefit an LLM most, because there are so many training examples of them, compared to almost any other language.
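The pseudocode above, made runnable as a minimal Python sketch (the names and the single parent fact are hypothetical, purely for illustration):

```python
# Toy version of the brute-force search sketched above: try every
# candidate in every placeholder until the condition holds.
parent_of = {"timmy": "sarah"}  # assumed fact, for illustration only

solution = None
for person in ["martha", "brian", "sarah", "tyrone"]:
    if parent_of["timmy"] == person:
        solution = person
        print("solved!", person)
```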
>> You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.
SLD-Resolution with unification (Prolog's automated theorem proving algorithm) is the polar opposite of brute force: as the proof proceeds, the cardinality of the set of possible answers [1] decreases monotonically. Unification itself is nothing but a dirty hack to avoid having to ground the Herbrand base of a predicate before completing a proof; which is basically going from an NP-complete problem to a linear-time one (on average).
Besides which I find it very difficult to see how a language with an automated theorem prover for an interpreter "doesn't do reasoning". If automated theorem proving is not reasoning, what is?
___________________
[1] More precisely, the resolution closure.
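To illustrate the unification point above, here is a toy unifier in Python (terms are tuples, variables are capitalised strings). This is a sketch of the idea only, not Prolog's actual implementation, and it omits the occurs check:

```python
def is_var(t):
    # Prolog convention: variables start with an uppercase letter
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # follow variable bindings to their current value
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    # returns an extended substitution, or None if the terms clash
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# binds X to sarah in one step, with no enumeration of candidates
print(unify(("parent", "X", "timmy"), ("parent", "sarah", "timmy")))
```

The point the comment makes is visible here: the variable is bound directly against the structure of the other term, rather than by grounding out every possible value first.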
> "as the proof proceeds, the cardinality of the set of possible answers [1] decreases"
In the sense that it cuts off part of the search tree where answers cannot be found? The query

    member(X, [1,2,3,4]), X > 5, slow_computation(X, 0.001).

will never do the slow_computation - but if it did, it would come up with the same result. How is that the polar opposite of brute force, rather than an optimization of brute force? If a language has tail call optimization then it can handle deeper recursive calls with less memory. Without TCO it would do the same thing and get the same result, but using more memory, assuming it had enough memory. TCO and non-TCO aren't polar opposites; they are almost the same.
Rather, in the sense that during a Resolution-refutation proof, every time a new Resolution step is taken, the number of possible subsequent Resolution steps either gets smaller or stays the same (i.e. "decreases monotonically"). That's how we know for sure that if the proof is decidable there comes a point at which no more Resolution steps are left, and either the empty clause is all that remains, or some non-empty clause remains that cannot be reduced further by Resolution.
So basically Resolution gets rid of more and more irrelevant ...stuff as it goes. That's what I mean that it's "the polar opposite of brute force". Because it's actually pretty smart and it avoids doing the dumb thing of having to process all the things all the time before it can reach a conclusion.
Note that this is the case for Resolution, in the general sense, not just SLD-Resolution, so it does not depend on any particular search strategy.
I believe SLD-Resolution specifically (which is the kind of Resolution used in Prolog) goes much faster, first because it's "[L]inear" (i.e. in any Resolution step one clause must be one of the resolvents of the last step) and second because it's restricted to [D]efinite clauses and, as a result, there is only one resolvent at each new step and it's a single Horn goal so the search (of the SLD-Tree) branches in constant time.
Refs:
J. Alan Robinson, "A computer-oriented logic based on the Resolution principle" [1965 paper that introduced Resolution]
https://dl.acm.org/doi/10.1145/321250.321253
Robert Kowalski, "Predicate Logic as a Programming Language"
https://www.researchgate.net/publication/221330242_Predicate... [1974 paper that introduced SLD-Resolution]
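The SLD-Resolution procedure described above can be made concrete with a toy interpreter. The sketch below is a minimal depth-first, leftmost-goal SLD resolver in Python over the classic ancestor program (assumptions: no occurs check, no cut, naive variable renaming by step counter - nothing like a real Prolog engine):

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, n):
    # standardise clause variables apart between resolution steps
    if is_var(t):
        return t + "#" + str(n)
    if isinstance(t, tuple):
        return tuple(rename(x, n) for x in t)
    return t

def solve(goals, program, s, step=0):
    # SLD-Resolution: always resolve the leftmost goal, depth first
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for head, body in program:
        s2 = unify(goal, rename(head, step), s)
        if s2 is not None:
            new_goals = [rename(b, step) for b in body] + rest
            yield from solve(new_goals, program, s2, step + 1)

# ancestor(X,Y) :- parent(X,Y).
# ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
program = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Y"), [("parent", "X", "Z"), ("ancestor", "Z", "Y")]),
]

answers = [walk("Q", s) for s in solve([("ancestor", "tom", "Q")], program, {})]
print(answers)  # ['bob', 'ann']
```

Note how each resolution step replaces the selected goal with the (renamed) body of a matching clause; goals that unify with no clause head simply fail, which is the pruning discussed above.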
I don't want to keep editing the above comment, so I'm starting a new one.
I really recommend that anyone with an interest in CS and AI read at least J. Alan Robinson's paper above. For me it really blew my mind when I finally found the courage to do it (it's old and a bit hard to read). I think there's a trope in wuxia fiction where someone finds an ancient scroll that teaches them a long-lost kung-fu and they become enlightened? That's how I felt when I read that paper, like I gained a few levels in one go.
Resolution is a unique gem of symbolic AI, one of its major achievements and a workhorse: used not only in Prolog but also in one of the two dominant branches of SAT-solving (i.e. the one that leads from Davis-Putnam to Conflict Driven Clause Learning) and even in machine learning, in one of the two main branches of Inductive Logic Programming (which I study), which is based on trying to perform induction by inverting deduction, and so by inverting Resolution. There's really an ocean of knowledge that flows never-ending from Resolution. It's the bee's knees and the aardvark's nightgown.
I sincerely believe that the reason so many CS students seem to be positively traumatised by their contact with Prolog is that the vast majority of courses treat Prolog as any other programming language and jump straight to the peculiarities of the syntax and how to code with it, and completely fail to explain Resolution theorem proving. But that's the whole point of the language! What they get instead is some lyrical waxing about the "declarative paradigm", which makes no sense unless you understand why it's even possible to let the computer handle the control flow of your program while you only have to sort out the logic. Which is to say: because FOL is a computational paradigm, not just an academic exercise. No wonder so many students come off those courses thinking Prolog is just some stupid academic faffing about, and that it's doing things differently just to be different (not a strawman - actual criticism that I've heard).
In this day and age where confusion reigns about what even it means to "reason", it's a shame that the answer, that is to be found right there, under our noses, is neglected or ignored because of a failure to teach it right.
Excellent and informative comments!
The way to learn a language is not via its syntax but by understanding the computation model and the abstract machine it is based on. For imperative languages this is rather simple, so we can jump right in and muddle our way to some sort of understanding. With functional languages it is much harder (you need to know the logic of functions), and it is quite impossible with logic languages (you need to know predicate logic). Thus we need to first focus on the underlying mathematical concepts for these categories of languages.
The Robert Kowalski paper Predicate Logic as a Programming Language you list above is the Rosetta stone of logic languages and an absolute must-read for everybody. It builds everything up from the foundations using implication (in disjunctive form), clause, clausal sentence, semantics, Horn clauses and computation (i.e. resolution derivation); all absolutely essential to understanding! This is the "enlightenment scroll" of Prolog.
I don't understand (the point of) your example. In all branches of the search `X > 5` will never be `true`, so yeah, `slow_computation` will not be reached. How does that relate to your point of it being "brute force"?
>> but if it did, it would come up with the same result
Meaning either changing the condition or the order of the clauses. How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it?
The point is to compare a) evaluate all three goals (member, >5, slow_computation) then fail because the >5 test failed; against b) evaluate (member, >5) then fail. And to ask whether that's the mechanism YeGoblynQueenne is referring to. If so, is it valid to describe b as "the polar opposite" of a? They don't feel like opposites, merely an implementation detail performance hack. We can imagine some completely different strategy, such as "I know from some other Constraint Logic propagation that slow_computation has no solutions, so I don't even need to go as far as the X>5 test", which is "clever" not "brute".
> "How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it"
I know it doesn't, but there's no reason why it can't. In a C-like language it's common to do short-circuit Boolean logic evaluation like:

    A && B && C

and if the first AND fails, the second is not tested. But if the language/implementation doesn't have that short-circuit optimisation, both tests are run and the outcome doesn't change. Short-circuit eval isn't the opposite of full eval. And yes, this is nitpicking the term "polar opposite of", but that's the relevant bit about whether something is clever or brute: if you go into every door, that's brute. If you try every door and some are locked, that's still brute. If you see some doors have snow up to them and you skip the ones with no footprints, that's completely different.
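The short-circuit point can be demonstrated directly; Python's `and` short-circuits exactly like C's `&&` (a trivial sketch, with made-up test names mirroring the Prolog query above):

```python
calls = []

def check(name, result):
    # record which tests actually run
    calls.append(name)
    return result

# mirrors member(...), X > 5, slow_computation(...):
# the second test fails, so the slow third test is never evaluated
_ = check("member", True) and check("gt5", False) and check("slow", True)
print(calls)  # ['member', 'gt5']
```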
Prolog was introduced to capture natural language - in a logic/symbolic way that didn't prove as powerful as today's LLMs, for sure, but this still means there is a large corpus of direct English-to-Prolog mappings available for training, and also the mapping rules are much more straightforward by design. You can pretty much translate simple sentences 1:1 into Prolog clauses, as in the classic boring example:

    % "the boy eats the apple"
    eats(boy, apple).

This is being taken advantage of in Prolog code generation using LLMs. In the Quantum Prolog example, the LLM is also instructed not to generate search strategies/algorithms, but just the planning domain representation and action clauses for changing those domain state clauses, which is natural enough in vanilla Prolog. The results are quite a bit more powerful, closer to end-user problems, and upward in the food chain compared to the usual LLM coding tasks for Python and JavaScript, such as boilerplate code generation and similarly idiosyncratic problems.
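The 1:1 mapping for simple sentences can be sketched mechanically. This is a toy subject-verb-object translator only (a real system would need actual parsing; the function name is hypothetical):

```python
def sentence_to_clause(sentence):
    # naive SVO mapping: "the boy eats the apple" -> eats(boy, apple).
    words = [w for w in sentence.lower().split() if w != "the"]
    subject, verb, obj = words  # assumes exactly subject, verb, object remain
    return "%s(%s, %s)." % (verb, subject, obj)

print(sentence_to_clause("the boy eats the apple"))  # eats(boy, apple).
```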
"large corpus" - large compared to the amount of Python on Github or the amount of JavaScript on all the webpages Google has ever indexed? Quantum Prolog doesn't have any relevant looking DuckDuckGo results, I found it in an old comment of yours here[1] but the link goes to a redirect which is blocked by uBlock rules and on to several more redirects beyond which I didn't get to a page. In your linked comment you write:
> "has convenient built-in recursive-descent parsing with backtracking built into the language semantics, but also has bottom-up parsing facilities for defining operator precedence parsers. That's why it's very convenient for building DSLs"
which I agree with, for humans. What I am arguing is that LLMs don't have the same notion of "convenient". Them dumping hundreds of lines of convoluted 'unreadable' Python (or C or Go or anything) to implement "half of Common Lisp" or "half of a Prolog engine" for a single task is fine, they don't have to read it, and it gets the same result. What would be different is if it got a significantly better result, which I would find interesting but haven't seen a good reason why it would.
This sparked a really fascinating discussion, I don't know if anyone will see this but thanks everyone for sharing your thoughts :)
I understand your point - to an LLM there's no meaningful difference between one Turing-complete language and another. I'll concede that I don't have a counter-argument, and perhaps it doesn't need to be Prolog - though my hunch is that LLMs tend to give better results when using purpose-built tools for a given type of problem.
The only loose end I want to address is the idea of "doing reasoning."
This isn't an AGI proposal (I was careful to say "good at writing prolog") just an augmentation that (as a user) I haven't yet seen applied in practice. But neither have I seen it convincingly dismissed.
The idea is the LLM would act like an NLP parser that gradually populates a prolog ontology, like building a logic jail one brick at a time.
The result would be a living breathing knowledge base which constrains and informs the LLM's outputs.
The punchline is that I don't even know any prolog myself, I just think it's a neat idea.
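A minimal sketch of the "one brick at a time" idea above - hypothetical shape only; a real version would assert clauses into an actual Prolog engine rather than a Python set:

```python
# Facts the LLM extracts get asserted into a knowledge base; a fact that
# contradicts an earlier one is rejected, so later outputs stay
# constrained by what has already been established.
kb = set()

def assert_fact(functor, args, negated=False):
    if (functor, args, not negated) in kb:
        return False  # contradiction: the LLM must revise, not hallucinate
    kb.add((functor, args, negated))
    return True

print(assert_fact("parent", ("sarah", "timmy")))        # accepted
print(assert_fact("parent", ("sarah", "timmy"), True))  # rejected: contradicts
```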
It's a Horn clause resolver... that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM, but if you can graft any programming language to it, you can graft Prolog more easily.
Also, that you push Python and JavaScript makes me think you don't know many languages. Those are terrible languages to try to graft to anything. Just because you only know those 2 languages doesn't make them good choices for something like this. Learn a real language Physicist.
> Also, that you push Python and JavaScript
I didn't push them.
> Those are terrible languages to try to graft to anything.
Web browsers, Blender, LibreOffice and Excel all use those languages for embedded scripting. They're fine.
> Just because you only know those 2 languages doesn't make them good choices for something like this.
You misunderstood my claim and are refuting something different. I said there is more training data for LLMs to use to generate Python and JavaScript, than Prolog.
I'm not. Python and JS are scripting languages. And in this case, we want something that models formal logic. We are hammering in a nail, you picked up a screwdriver and I am telling you to use a claw hammer.
What does this comment even mean? A claw hammer? By formal definitions, all 3 languages are Turing complete and can express programs of the same computational complexity.
> By formal definitions, all 3 languages are Turing complete and can express programs of the same computational complexity.
So is Brainfuck.
Turing equivalence does not imply that languages are equally useful choices for any particular application.
But we kinda don't use Python for a database query over SQL, do we?
I use an ORM every day.
> I have no idea how to graft Prolog to an LLM
Wrapping either the SWI-Prolog MQI, or even simpler an existing Python interface like janus_swi, in a simple MCP server is probably an easy weekend project. Tuning the prompting to get an LLM to reliably and effectively choose to use it when it would benefit from symbolic reasoning may be harder, though.
No call for talking down at people. No one has ever been convinced by being belittled.
We would begin by having a Prolog server of some kind (I have no idea if Prolog is parallelized but it should very well be if we're dealing with Horn Clauses).
There would be MCP bindings to said server, which would be accessible upon request. The LLM would provide a message, it could even formulate Prolog statements per a structured prompt, and then await the result, and then continue.
> Its a Horn clause resolver...that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM but if you can graft any programming language to it, you can graft Prolog more easily.
By grafting the LLM into Prolog and not the other way around?
Of course it does "reasoning", what do you think reasoning is? From a quick google: "the action of thinking about something in a logical, sensible way". Prolog searches through a space of logical propositions (constraints) and finds conditions that lead to solutions (if any exist).
(a) Try adding another 100 or 1000 interlocking propositions to your problem. It will find solutions or tell you one doesn't exist. (b) You can verify the solutions yourself. You don't get that with imperative descriptions of problems. (c) Good luck sandboxing Python or JavaScript with the threat of prompt injection still unsolved.
Of course it doesn't "do reasoning", why do you think "following the instructions you gave it in the stupidest way imaginable" is 'obviously' reasoning? I think one definition of reasoning is being able to come up with any better-than-brute-force thing that you haven't been explicitly told to use on this problem.
Prolog isn't "thinking". Not about anything, not about your problem, your code, its implementation, or any background knowledge. Prolog cannot reason that your problem is isomorphic to another problem with a known solution. It cannot come up with an expression transform that hasn't been hard-coded into the interpreter which would reduce the amount of work involved in getting to a solution. It cannot look at your code, reason about it, and make a logical leap over some of the code without executing it (in a way that hasn't been hard-coded into it by the programmer/implementer). It cannot reason that your problem would be better solved with SLG resolution (tabling) instead of SLD resolution (depth first search). The point of my example being pseudo-Python was to make it clear that plain Prolog (meaning no constraint solver, no metaprogramming), is not reasoning. It's no more reasoning than that Python loop is reasoning.
If you ask me to find the largest prime number between 1 and 1000, I might think to skip even numbers, I might think to search down from 1000 instead of up from 1. I might not come up with a good strategy, but I will reason about the problem. Prolog will not. You code what it will do, and it will slavishly do what you coded. If you code counting 1-1000 it will do that. If you code the Sieve of Eratosthenes it will do that instead.
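The two strategies in the prime example can be written out; both land on the same answer, and in neither case does the program "reason" - the strategy lives in the code the programmer (or LLM) chose to write:

```python
def is_prime(n):
    # trial division up to the square root
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# blind strategy: test every number up to 1000, keep the largest prime
blind = max(n for n in range(2, 1001) if is_prime(n))

# "reasoned" strategy, baked in by the programmer: search down from the
# limit, odd numbers only, stop at the first hit
def first_prime_below(limit):
    n = limit - 1 if limit % 2 == 0 else limit - 2
    while n > 2:
        if is_prime(n):
            return n
        n -= 2
    return 2

print(blind, first_prime_below(1000))  # 997 997
```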
The disagreement you have with the person you are replying to just boils down to a difference in the definition of "reasoning."
It's a Horn clause interpreter. Maybe look up what that is before commenting on it. Clearly you don't have a good grasp of computer science concepts or math, based upon your comments here. You also don't seem to understand the AI/ML definition of reasoning (which is based in formal logic, much like Prolog itself).
Python and Prolog are based upon completely different kinds of math. The only thing they share is that they are both Turing complete. But being Turing complete isn't a strong or complete mathematical definition of a programming language. This is especially true for Prolog which is very different from other languages, especially Python. You shouldn't even think of Prolog as a programming language, think of it as a type of logic system (or solver).
None of that is relevant.
Contrary to what everyone else is saying, I think you're completely correct. Using it for AI or "reasoning" is a hopeless dead end, even if people wish otherwise. However I've found that Prolog is an excellent language for expressing certain types of problems in a very concise way, like parsers, compilers, and assemblers (and many more). The whole concept of using a predicate in different modes is actually very useful in a pragmatic way for a lot of problems.
When you add in the constraint solving extensions (CLP(Z) and CLP(B) and so on) it becomes even more powerful, since you can essentially mix vanilla Prolog code with solver tools.
The reason why you can write parsers with Prolog is because you can cast the problem of determining whether a string belongs to a language or not as a proof, and, in Prolog, express it as a set of Definite Clauses, particularly with the syntactic sugar of Definite Clause Grammars that give you an executable grammar that acts as both acceptor and generator and is equivalent to a left-corner parser.
Now, with that in mind, I'd like to understand how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?
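The "both acceptor and generator" point can be made concrete by contrast: for the toy aⁿbⁿ language, a non-logic language like Python needs two separate functions, where a single DCG rule gives Prolog both directions (a sketch under that framing):

```python
# Grammar: S -> '' | 'a' S 'b'

def accepts(s):
    # acceptor: does s belong to the language a^n b^n?
    if s == "":
        return True
    return s[:1] == "a" and s[-1:] == "b" and accepts(s[1:-1])

def generate(n):
    # generator: the string of the language with n a's and n b's
    return "a" * n + "b" * n

print(accepts("aaabbb"), accepts("aab"))  # True False
print(generate(3))  # aaabbb
```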
Clearly people write parsers in C and C++ and Pascal and OCaml, etc. What does it mean to come in with "the reason you can write parsers with Prolog..."? I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic. Like saying that Lisp map() is better than Python map() because Lisp map is based on formal Lambda Calculus and Python map is an inferior imitation for blub programmers. When a programmer maps a function over a list and gets a result, it's a distinction without a difference. When a programmer writes a getchar() peek() and goto state machine parser with no formalism, it works; what difference does the formalism behind the implementation practically make?
Yes, maybe the Prolog way means concise code, so it's easier for a human to tell whether the code is a correct expression of the intent, but an LLM won't look at it like that. Whatever the formalism brings, it evidently isn't enough, or every parser task of the last 50 years would have been done in Prolog. Therefore it isn't of any particular interest or benefit, except academic.
> both acceptor and generator
Also academically interesting but practically useless, due to the combinatorial explosion of "all possible valid strings" after the utterly basic "aaaaabbbbbbbbbbbb" examples.
> "how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?"
If drawing a painting is art, is it art if a computer pulls up a picture of a painting and shows it on screen? No. If a human coded the proof into a computer, the human is reasoning, the computer isn't. If the computer comes up with the proof, the computer is reasoning. Otherwise you're in a situation where dominos falling over is "doing reasoning", because it can be expressed formally as a chain of connected events where the last one only falls if the whole chain is built properly, and that's absurd.
> If a human coded the proof into a computer, the human is reasoning, the computer isn't. ... If the computer comes up with the proof, the computer is reasoning.
That is exactly what "formal logic programming" is all about. The machine is coming up with the proof for your query based on the facts/rules given by you. Therefore it is a form of reasoning.
Reasoning (cognitive thinking) is expressed as Arguments (verbal/written premises-to-conclusions) a subset of which are called Proofs (step-by-step valid arguments). Using Formalization techniques we have just pushed some of those proof derivations to a machine.
I pointed this out in my other comment here https://news.ycombinator.com/item?id=45911177 with some relevant links/papers/books.
See also Logical Formalizations of Commonsense Reasoning: A Survey (from the Journal of Artificial Intelligence Research) - https://jair.org/index.php/jair/article/view/11076
With Prolog, the proof is carried out by the computer, not a human. A human writes up a theory and a theorem and the computer proves the theorem with respect to the theory. So I ask again, how is carrying out a proof not reasoning?
>> I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic.
That's not a great way to have a discussion.
The word "reason" came into this thread with the original comment:

    3. LLMs are bad at solving reasoning problems.
    4. Prolog is good at solving reasoning problems.

I agree with you. In Prolog, "?- 1=1." is reasoning by definition. Then 4. becomes "LLMs should emit Prolog because Prolog is good at executing Prolog code". I think that's not a useful place to be, so I was trying to head off going there. But now I'll go with you - I agree it IS reasoning - can you please support your case that "executing Prolog code is reasoning" makes Prolog more useful for LLMs to emit than Python?
This is not my claim:
>> "executing Prolog code is reasoning" makes Prolog more useful for LLMs to emit than Python?
I said what I think about LLMs generating Prolog here:
https://news.ycombinator.com/item?id=45914587
But I was mainly asking why you say that Prolog's execution is "not reasoning". I don't understand what you mean that '"?- 1=1." is reasoning by definition' and how that ties-in with our discussion about Prolog reasoning or not.
"?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D. This is the point you refused to move on from until I agreed. So I agreed. So we could get back to the interesting topic.
A topic you had no interest in, only interest in dragging it onto a tangent and grinding it down to make... what point, exactly? If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition: basically everything is reasoning, and almost nothing is not. When I tried to say in advance that this wouldn't be a useful direction and I didn't want to go there, you said it was "not a great way to have a discussion". And now, having dragged me off onto this academic tangent, you dismiss it as "I wasn't interested in that other topic anyway". Annoying.
I'm sorry you find my contribution to the discussion annoying, but how should I feel if you just "agree" with me as a way to get me to stop arguing?
But I think your annoyance may be caused by misunderstanding my argument. For example:
>> If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.
Everything is not reasoning, nor is executing any code reasoning, but "executing Prolog code" is, because executing Prolog code is a special case of executing code. The reason for that is that Prolog's interpreter is an automated theorem prover, therefore executing Prolog code is carrying out a proof; in an entirely literal and practical sense, and not in any theoretical or abstract sense. And it is very hard to see how carrying out a proof automatically is "not reasoning".
I made this point in my first comment under yours, here:
https://news.ycombinator.com/item?id=45909159
The same clearly does not apply to Python, because its interpreter is not an automated theorem prover; it doesn't apply to javascript because its interpreter is not an automated theorem prover; it doesn't apply to C because its compiler is not an automated theorem prover; and so on, and so forth. Executing code in any of those languages is not reasoning, except in the most abstract and, well, academic, sense, e.g. in the context of the Curry-Howard correspondence. But not in the practical, down-to-brass-tacks way it is in Prolog. Calling what Prolog does reasoning is not a definition of reasoning that's too broad to be useful, as you say. On the contrary, it's a very precise definition of reasoning that applies to Prolog but not to most other programming languages.
I think you misunderstand this argument and as a consequence fail to engage with it and then dismiss it as irrelevant because you misunderstand it. I think you should really try to understand it, because it's obvious you have some strong views on Prolog which are not correct, and you might have the chance to correct them.
I absolutely have an interest in any claim that generating Prolog code with LLMs will fix LLMs' inability to reason. Prolog is a major part of my programming work and research.
> "?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D.
This is the dumbest thing I have read yet on HN. You are absolutely clueless about this topic and are merely arguing for argument's sake.
> If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.
What does this even mean? It has already been pointed out that Prolog does a specific type of formalized reasoning which is well understood. The fact that there are other formalized models to tackle subdomains of "Commonsense Reasoning" does not detract from the above. That is why folks are trying to marry Prolog (predicate logic) to LLMs (mainly statistical approaches) to get the best of both worlds.
User "YeGoblynQueenne" was being polite in his comments but for some reason you willfully don't want to understand and have come up with ridiculous examples and comments which only reflect badly on you.
You call it the dumbest thing you have ever read, and say that I know nothing - but you agree that it is a correct statement ("Prolog does a specific type of formalized reasoning").
> "What does this even mean?"
For someone who is so eager to call comments dumb, you sure have a lot of not-understanding going on.
1. Someone said "Prolog is good at reasoning problems"
2. I said it isn't any better than other languages.
3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!
4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.
> "are merely arguing for argument's sake."
Presumably you are arguing for some superior purpose?
The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python - given the condition that LLMs don't have to care about conciseness or expressivity or readability in the same way humans do. For one example, I say it would be no better for an LLM to solve an Einstein puzzle one way or the other. The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.
You edited your comment without any indication tags, which is dishonest. However, my previous response at https://news.ycombinator.com/item?id=45939440 is still valid. This is an addendum to that:
> The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python
I have no interest in trying to change your mind since you simply do not have the first idea about what Prolog is doing vis-a-vis any other non-logic programming language. You have to have some basic knowledge before we can have a meaningful discussion.
However, in my previous comment here https://news.ycombinator.com/item?id=45712934 I link to some use cases from others. In particular, the case study from user "bytebach" is noteworthy and explains exactly what you are asking for.
> The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.
This is your dishonest edit without notification. I refuse to suffer wilful stupidity and hence retorted in a pointed manner; that was the only way left to get the message across. We had given you enough data/pointers in our detailed comments none of which you seem to have even grasped nor looked into. In a forum like this, if we are to learn from each other, both parties must put forth effort to understand the other side and articulate one's own position clearly. You have failed on both counts in this thread.
> but you agree that it is correct.
No, i did not; do not twist nor misrepresent my words. Your example had nothing whatsoever to do with "Reasoning" and hence i called it dumb.
> you sure have a lot of not-understanding going on.
Your and my comments are there for all to see. Your comments are evidence that you are absolutely clueless on Reasoning, Logic Programming Approaches and Prolog.
> 1. Someone said "Prolog is good at reasoning problems"
Which is True. But it is up to you to present the world-view to Prolog in the appropriate Formal manner.
> 2. I said it isn't any better than other languages.
Which is stupid. This single statement establishes the fact that you know nothing about Logic Programming nor the aspect of Predicate Logic it is based on.
> 3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!
Which is True and not a "gotcha". You have no definite understanding of what the word "Reasoning" means in the context of Prolog. We have explained concepts and pointed you to papers none of which you are interested in studying nor understanding.
> 4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.
What does this even mean? This is just nonsense verbiage.
> Presumably you are arguing for some superior purpose?
Yes. I am testing my understanding of Predicate Logic/Logic Programming/Prolog against others. Also whether others have come up with better ways of application in this era of LLMs, i.e. what are the different ways to use Prolog with LLMs today?
I initially thought you were probably wanting a philosophical discussion of what "Reasoning" means and hence pointed to some relevant articles/papers but i am now convinced you have no clue about this entire subject and are really making up stuff as you go.
You are wasting everybody's time, testing their patience and coming across as totally ignorant on this domain.
Even in your example (which is obviously not a correct representation of Prolog), that code will run X orders of magnitude faster and with 100% reliability compared to the far inferior reasoning capabilities of LLMs.
This is not the point though
Algorithmically there's nothing wrong with using BFS/DFS to do reasoning as long as the logic is correct and the search space is constrained sufficiently. The hard part has always been doing the constraining, which LLMs seem to be rather good at.
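That division of labor is easy to sketch. In the toy backtracking (DFS) solver below, the search itself is trivial; all the leverage comes from constraints that prune partial assignments early. The puzzle and variable names are made up purely for illustration:

```python
# Toy backtracking (DFS) solver: assign values to variables one at a time,
# abandoning any partial assignment that violates a constraint. The search
# is only tractable because the constraints cut branches early.

def solve(variables, domain, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]
    for value in domain:
        assignment[var] = value
        # Prune: every constraint must hold on the partial assignment.
        if all(c(assignment) for c in constraints):
            result = solve(variables, domain, constraints, assignment)
            if result:
                return result
        del assignment[var]  # backtrack
    return None

# Tiny made-up puzzle: a, b, c take distinct values from 1..3,
# with a < b and c != 2.
ok = [
    lambda s: len(set(s.values())) == len(s),                   # all distinct
    lambda s: "a" not in s or "b" not in s or s["a"] < s["b"],  # a < b
    lambda s: s.get("c") != 2,                                  # c != 2
]
print(solve(["a", "b", "c"], [1, 2, 3], ok))  # {'a': 1, 'b': 2, 'c': 3}
```

Constraints written as predicates over partial assignments are exactly the "constraining" the comment refers to; the DFS skeleton never changes.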
> This is not the point though
Could you expand on what the point is? That the author's opinion, without much justification, is that this is not reasoning?
What makes you think your brain isn't also brute-forcing potential solutions subconsciously and only surfacing the useful results?
Because I can solve problems that would take the age of the universe to brute force, without waiting the age of the universe. So can you: start counting at 1, increment the counter up to 10^8000, then print the counter value.
Prolog: 1, 2, 3, 4, 5 ...
You and me instantly: 10^8000
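The contrast is easy to demonstrate in any language with arbitrary-precision integers (Python here, purely as illustration): the value is computed directly, with no counting loop anywhere.

```python
# Computing 10**8000 directly is instant; enumerating up to it would take
# longer than the age of the universe.
n = 10 ** 8000      # arbitrary-precision integer, produced in one step
print(len(str(n)))  # 8001 digits: a 1 followed by 8000 zeros
```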
The brain can still use other means of working in addition to brute forcing solutions. For example, how would you go about solving the chess puzzle of eight queens that doesn't involve going through the potential positions and then filtering out the options that don't match the criteria for the solution?
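For reference, the textbook answer to that puzzle is backtracking, which is exactly generate-and-filter with early pruning rather than filtering complete boards. A minimal Python sketch:

```python
# Classic backtracking solution to the N-queens puzzle: place one queen
# per row, pruning any column or diagonal conflict before descending.

def queens(n, placed=()):
    row = len(placed)
    if row == n:
        return placed  # one queen per row, given as a column index
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            result = queens(n, placed + (col,))
            if result:
                return result
    return None

print(queens(8))  # first solution found, as column index per row
```

The pruning test (no shared column, no shared diagonal) is what keeps the search far below the 8^8 raw placements.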
Prolog can also evaluate mathematical expressions directly as well.
There's a whole lot of undecidable (or effectively undecidable) edge cases that can be adequately covered. As a matter of fact, Decidability Logic is compatible with Prolog.
Can you try calculating 101 * 70 in your head?
I think therefore I am calculator?
I can absolutely try this. Doesn't mean i'll solve it. If i solve it there's no guarantee i'll be correct. Math gets way harder when i don't have a legitimate need to do it. This falls in the "no legit need" so my mind went right to "100 * 70, good enough."
Or you could do (100 + 1)*70 => 100*70 + 1*70
Very easy to solve, just like it is easy to solve many other ones once you know the tricks.
I recommend this book: https://www.amazon.com/Secrets-Mental-Math-Mathemagicians-Ca...
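The trick in question is just the distributive law, which is trivially checkable:

```python
# Mental-math shortcut for 101 * 70: split 101 into 100 + 1 and
# distribute, turning one awkward product into two easy ones.
assert (100 + 1) * 70 == 100 * 70 + 1 * 70 == 7000 + 70
print((100 + 1) * 70)  # 7070
```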
Completely missing the point on purpose?
Elaborate.
you don’t solve it by brute forcing possible solutions until one sticks
Yeah, read it in another comment. Why do you think doing calculations in your head is brute-forcing? Many people can do it flawlessly, without even knowing of these "tricks". They just know. Is that brute-force?
[flagged]
You are not replying to what I said. I am not going to repeat myself, see the parent comment you replied to.
Your original comment completely missed the point of what it was replying to; you phrased it like you were correcting them, but you were actually in agreement and didn't seem to realize it. They tried to clarify when you asked, and when you responded you assumed they had the opposite viewpoint from the one they actually hold.
I replied to "Can you try calculating 101 * 70 in your head?".
Yes, many people can successfully calculate it through learned tricks or no learned tricks.
That is all I am saying. And my question is: how is calculating it in your own head brute-force, especially if done without such tricks?
No need to complicate it, answers to my question would suffice.
> how is calculating it in your own head brute-force
Doing this calculation in your head is not brute force, which was their entire point. Their math question was an example of why the brain isn't brute-forcing solutions, replying to this:
> What makes you think your brain isn't also brute forcing potential solutions subconciously and only surfacing the useful results?
Um, that's really easy to do in your head, there's no carrying or anything? 7,070
7 * 101 = 707, then 707 * 10 = 7,070
And computers don't brute-force multiplication either, so I'm not sure how this is relevant to the comment above?
I think it is very relevant, because no brute-forcing is involved in this solution.
That's not true, the 'brute force' part is searching for a shortcut that works.
The brute force got reduced down to fast heuristics, like Arthur Benjamin's Mathemagics.
It’s almost like you’re proving the point of his reply…