- Henry Baker: Critique of DIN Kernel Lisp
- Henry Baker: Object Identity and Immutability
- Henry Baker: Linear Objects
- MIT: Dynamic Languages Seminar
- MIT/Jonathan Bachrach: GOO - Generic Object Orientator
- Some committee: "Modernizing Common Lisp: Recommended Extensions", July 7, 1999
- Some committee: "DRAFT Minutes of the NCITS/J13 meeting on 10/14/99 at SRI"
- Richard Gabriel: The Design of Parallel Programming Languages, 1991
- Paul Graham: Five Questions about Language Design
- Paul Graham: on Being Popular
- Paul Graham: Arc, a new dialect of Lisp - arc
- EuLisp: EuLisp, EuLisp Definition, Latest EuLisp implementation (Youtoo), Latest EuLisp Definition PDF HTML
- Apply: (Defunct) German project for a modern Lisp: Apply
Lyn Headley's wish list:
- fully object oriented. I think all functions should be methods and I should be able to specialize elt, length, etc for my own types.
- concurrent and distributed
Fare Rideau's remarks:
- Read about the history and spirit of Lisp and learn from it (Kent Pitman can tell you a lot about its spirit). If Lisp is the way it is, there is a reason for it, for better or worse.
- A new Lisp would, in many ways, not be CL. The spirit and letter of CL are adapted to certain assumptions about the universe, and these assumptions might not be the ones the designers of a new Lisp would make.
- For instance, something very important to me is concurrency and distribution. But in a world where any other thread can (setf (symbol-function foo) ...), it is no longer possible to make type-based cross-function optimizations without breaking the semantics of CL, and distributed programming within a single CL universe becomes impossible. For a concurrent dynamic language that solves this issue cleanly, see Erlang.
- Another problem with concurrency: when you define domain-specific languages, you want thread interruption and interaction between concurrent threads to preserve domain-specific invariants, and this is basically impossible without appropriate support in both the compiler and the runtime. [Can you please elaborate on this point?]
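The (setf (symbol-function ...)) hazard mentioned above can be seen in a few lines. This is a minimal sketch with made-up functions FOO and BAR, not anything from an actual implementation:

```lisp
;; A compiler that propagated "FOO returns a number" into BAR would
;; break CL semantics, because any thread may replace FOO at any time:
(defun foo (x) (+ x 1))          ; returns a number -- today
(defun bar (x) (* 2 (foo x)))    ; may not bake that assumption in

;; (bar 3) => 8, until somebody evaluates:
;;   (setf (symbol-function 'foo) (lambda (x) (list x)))
;; after which BAR signals a type error instead.
```

Because BAR must call FOO through its function cell, the compiler cannot specialize the call site on FOO's current return type without risking exactly this kind of redefinition.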
Alexander Kjeldaas' Remarks:
I take exception to the notion that you cannot do type-based cross-function optimizations without breaking the semantics of CL. It is entirely possible; current CL implementations just do not do it yet. Follow this simple strategy:
- Compile optimized functions with knowledge of types as much as you want.
- Make sure you compile an unoptimized version of every optimized function.
- Remember what type-information the optimized version of the function depends on.
- Attach a trigger so that any function depending on type information that no longer holds is disabled when (setf (symbol-function foo) ...) is called; start using the unoptimized version instead.
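The strategy above might be sketched like this. All names are hypothetical, and a real implementation would do this inside the compiler rather than with an explicit flag:

```lisp
(defvar *fast-path-valid-p* t)

(defun helper (x) (+ x 1))

(defun fast-compute (x)
  ;; optimized version, compiled assuming HELPER returns a fixnum
  (the fixnum (* 2 (helper x))))

(defun slow-compute (x)
  ;; fully general fallback, no assumptions about HELPER
  (* 2 (helper x)))

(defun compute (x)
  ;; dispatch: use the optimized version only while its
  ;; type assumptions still hold
  (if *fast-path-valid-p* (fast-compute x) (slow-compute x)))

(defun redefine-helper (new-fn)
  ;; the "trigger": any redefinition of HELPER disables the fast path
  (setf *fast-path-valid-p* nil
        (symbol-function 'helper) new-fn))
```

For example, (compute 3) takes the fast path and returns 8; after (redefine-helper (lambda (x) (+ x 10))), COMPUTE falls back to the unoptimized version and still gives the correct answer for the new definition.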
Fare Rideau's remarks:
- Close, but no cigar:
- When you invalidate one function, other functions that depended on its invariants must be invalidated too, and so on. In a distributed system this can be very expensive: the many "global" and "special" bindings in CL are so many synchronization nightmares. The semantics of CL would have to be modified significantly to accommodate distributed computing.
- Your invalidation scheme, while a nice idea, concerns not only future invocations of the invalidated functions but also past invocations that are still on the stack. This means you cannot change the shape of the call graph or the internal data representation, because you may have to fall back to the unoptimized version. To do any non-local structural optimization, you would have to restrict yourself to "reversible" optimizations and dynamically convert everything back to the unoptimized versions, continuations included, when you hit an invalidation point. While theoretically possible, this would lead to horrible complexity and horrible performance in a distributed system (a global rewrite of system state).
- In CL, you can already allow such optimization by declaring functions inline, etc. But then you lose any well-defined semantics in the presence of dynamic redefinition. Erlang, for instance, has clear semantics for function redefinition that do not require expensive global synchronization.
- You can of course build a distributed system on top of CL -- this has been done. But you can likewise build distributed systems on top of C or any other Turing tar-pit. The point is that the distributed system is not CL.
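The inline trade-off mentioned above fits in a few lines. INCR and CALL-INCR are made-up names for illustration:

```lisp
;; With INLINE declared, a compiler may copy INCR's body into callers.
(declaim (inline incr))
(defun incr (x) (+ x 1))
(defun call-incr (x) (incr x))   ; may have captured the old body

(defun incr (x) (- x 1))         ; later redefinition
;; (call-incr 5) is now implementation-dependent: 6 if the old body
;; was inlined into CALL-INCR, 4 if the call goes through the
;; function cell.  A direct call (incr 5) sees the new definition.
```

This is exactly the "well-defined semantics" that gets lost: the standard permits either answer from CALL-INCR once inlining is allowed.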
Alexander Kjeldaas' Remarks:
Changing the internal data representation is possible, and performance (almost) does not enter into the equation. This kind of optimization is opportunistic -- you gamble, assuming you will win more than you lose. So you lose big when a struct is changed; too bad.
Stop the circus. Rewrite the objects. Update pointers. Write-protect the old objects. Forward writes from old objects to new objects. Optionally, use type-stable storage for optimized objects so you don't have to traverse everything.
I do not understand what you mean when you say that you cannot change the call graph.
I am not saying CL is good at distributed programming. What I am saying is that you can build a CL system with "pay-as-you-go" properties. The presence of threads makes it harder, but not impossible. Everything Self, GOO, and others have done is possible with CL.
Fare Rideau's remark summary:
- You (Alexander Kjeldaas): Performance is an exercise in caching
- Me: cache invalidation is catastrophically expensive in a global distributed environment, and its frequency is intrinsically unpredictable in a fully dynamic language with the illusion of a uniform global store
Jochen Schmidt's Remarks:
- Another problem with distribution is the need for some kind of security framework, such as a "sandbox" or something similar.
- A new Lisp should, IMHO, include high-level (and low-level) networking. If the standard is layered, this could be one of the upper layers, but I think supporting it is important nowadays.
- OOP should keep multiple inheritance or at least provide explicit mixins like Ruby or "Categories" like Objective-C. I really do not want to see another crippled OOP system like Java.
- A MOP should be part of the standard.
Additions (not by Jochen Schmidt):
- EuLisp, a nice attempt at a modern and efficient Lisp.
- Avoid ISLisp. [Why?]
- Dylan, a static language (!) derived from Common Lisp, Scheme and EuLisp. The first version had prefix syntax. The original Dylan book gives insights into its design.
- See: Lisp Evolution and Standardization, Frontiers in Artificial Intelligence and Applications, 1988
- Lessons can also be learned from NewtonScript (Apple) and its libraries. It is amazing how little memory it needs to run advanced object-oriented code.
- Replace LOOP by ITERATE and FORMAT by OUT. Add lispy regular expressions like in SCSH.
- Maybe the language Crush would be interesting?
Comment from Valery Khamenya:
The following languages could be considered:
- Clean -- with parallelism and a very nice approach to a virtual machine based on graph rewriting
- Mozart -- with an emphasis on parallelism and distribution
Comment from gambarimasu who is at gmail:
what would please me:
- start with cl
- scriptify, scriptify, scriptify -- make calling it much more convenient than python, ruby, etc. you should not have to use a non-lisp language. (for now, we can use emacs -batch with (require 'cl).)
- orthogonalize and clean up various small warts everybody knows about
- deprecate lots of stuff almost nobody uses -- they can be in libraries
- make series operators be named consistently with sequence operators. or even incorporate them into the language to make cl more functional, somehow. perhaps have sequence functions accept a :series keyword, somehow.
- maybe let sequence functions accept a :destructive keyword.
- other langs let you use non-congruent methods, right?
- type/class and function/method orthogonalization
- figure out whether and how to standardize, semi-standardize, apply a real-valued metric of degree of standardness, or whatever, and get the dealbreaker code into the language in some reasonable standardish way.
- the code you want is in some other language's comprehensive archive network! get uffi or whatever *really* convenient to use so you can *easily* run python, c, ruby, or whatever libraries from lisp (efficiency is a far lower priority than convenience) -- without even having to know how to program in those languages!
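The :destructive keyword wished for above could look like this hypothetical wrapper over the existing REMOVE/DELETE pair (REMOVE* is a made-up name; a real design would bake this into the sequence functions themselves):

```lisp
;; Hypothetical sketch of a :DESTRUCTIVE keyword: one entry point
;; that chooses between the copying and the destructive variant.
(defun remove* (item sequence &key destructive)
  (if destructive
      (delete item sequence)    ; may destructively modify SEQUENCE
      (remove item sequence)))  ; always returns a fresh sequence
```

For example, (remove* 2 (list 1 2 3)) and (remove* 2 (list 1 2 3) :destructive t) both yield (1 3), but only the latter is allowed to recycle the argument's storage.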
Comment from marijnh:
- Just one wee little thing: A boolean false that is not an empty list. This makes interfacing with notations that do separate those concepts (JSON, SQL) a whole lot more pleasant.
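One way such a distinct false might look, sketched with made-up names (+JSON-FALSE+, +JSON-NULL+, ENCODE-VALUE are all hypothetical):

```lisp
;; Distinct sentinel objects so "false", "null", and "empty list"
;; no longer collapse into NIL.
(defconstant +json-false+ 'json-false)
(defconstant +json-null+  'json-null)

(defun encode-value (v)
  (cond ((eq v +json-false+) "false")
        ((eq v +json-null+)  "null")
        ((null v)            "[]")   ; NIL now unambiguously means "empty list"
        (t (princ-to-string v))))
```

With NIL overloaded as in today's CL, the first three branches cannot be told apart, which is exactly what makes JSON and SQL interfacing unpleasant.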
Comments on a parallel / distributed Lisp, from MarkHoemmen:
- A lot of expertise in parallel programming comes from the scientific computing world, in which parallelism is typically exploited as SPMD, and the computing resources ("number of nodes") are usually fixed for the duration of an application. This has influenced the design of typical scientific computing programming libraries, such as MPI and GASNet. These can support threads as an additional layer, but threads do not cross nodes. (MPI 2 supports dynamic creation and destruction of processes, but I don't think many people use that feature -- correct me if I'm wrong!)
- In that community, threading is usually either done semi-automatically (via OpenMP and a parallelizing compiler) or by hand (pthreads and the like). There are extensions of OpenMP (such as Intel's) that support the notion of a "task queue," rather than the simplistic SPMD model that most OpenMP parallelization assumes.
- Scientific computing has developed a number of parallel programming languages. Some follow the SPMD school, such as UPC and Titanium, and others the threading school, such as Cilk. Parallel languages in development, such as Fortress, X10, and Chapel, have varying models. Of these three, I think X10 is the closest to releasing an implementation, and Fortress is the most ambitious (and perhaps the most Lisp-inspired?).
- CPU manufacturers are coming to the scientific computing community to learn how to exploit parallelism on the new multicore chips.
All this means that we Lispers could benefit from paying attention to how that community is designing languages and parallel programming libraries.
Wish list from Eric Normand:
- Do a clean sweep on the standard functions with four things in mind:
- Remove redundant functions. (elt vs nth, et al)
- Standardize naming of functions (and their arguments) (set-difference vs intersection).
- Rename functions to the currently accepted name (fold instead of reduce)
- Organize functions into "modules" so you can find what you want more quickly (:cl-sequence, :cl-math, etc).
- Clean up the sequence/list distinction (is it really necessary?)
- More functional programming facilities (like compose, curry, etc)
- Standardized bytecode representation (cross-platform)
- Decent free Windows implementation
- Standard package system and repository
- removed one because I think it's silly, now
- Continuations (if nothing else but for web/distributed programming, and they don't need to work everywhere)
- Fully transparent Persistent Object Database
- Generic functions dispatch on type AND number of arguments
- MOP part of standard (we need more introspection!)
- Good FFI to the most common programming languages
- Standard networking model
- Everything is a CLOS object, and all functions are generic functions (so you can "override" cons, cdr, car). I believe this was part of the original CLOS proposal, but I don't think it was kept. You could say (ensure-generic cons) or something like that, and it would convert a function into a generic function.
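ANSI CL's ensure-generic-function signals an error when the name already names an ordinary function, so the conversion wished for above needs something like the following hypothetical helper. ENSURE-GENERIC* is a made-up name, the sketch handles only simple required-parameter lambda lists, and the standard forbids doing this to built-ins such as CONS, so a user-defined function is used for illustration:

```lisp
;; Hypothetical ENSURE-GENERIC*: turn an ordinary function into a
;; generic function whose default method calls the old definition.
(defun ensure-generic* (name lambda-list)
  (let ((old (symbol-function name)))
    (fmakunbound name)
    (eval `(defgeneric ,name ,lambda-list))
    ;; the old definition becomes the unspecialized default method
    (eval `(defmethod ,name ,lambda-list (funcall ,old ,@lambda-list)))
    (symbol-function name)))
```

After, say, (ensure-generic* 'double '(x)), a later (defmethod double ((x string)) ...) can specialize what used to be a plain function, while numbers still go through the old definition.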
It would be really nice if the new Lisp could run seamlessly on multiple architectures, initially via virtual-machine (if necessary), but natively as well. It would also be nice if the various libraries for this new Lisp (except for hardware-specific ones) would be available for every architecture.
One thing I like about Python (and friends) is that I can expect it to work on Linux, Windows, and Mac OS X; even though I focus on Linux, there are times in which the script I developed in Linux needs to work in Windows. With few exceptions, I can expect both the core language and the libraries I used to be available for my script, regardless of platform.
The closest I see for Common Lisp with regards to this is CLISP. Right now, I'm interested in parallel programming (it's a bit of a pain to use multiple processes in Python, using custom message-passing), but the only library I can find that will likely meet my needs, CL-MPI, isn't tested in CLISP. While it's tested in SBCL, SBCL isn't (or at least, doesn't seem to be) available for Windows!
Of course, if CLISP is only available via Cygwin, I wouldn't consider that "seamless": I want to be able to install the language on a machine without other special installs and just have it work.
- Improve lisp reader API to avoid ugly hacks like CLPython's setup-omnivore-readmacro function.
- Improve stream API to have an option to unread a string (not only a char).
- eval should have a second optional argument - the environment object.
- Extend the tilde slash directive in the format string with an option of not consuming an argument.
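The string-unread wish above can be approximated today by wrapping the stream rather than pushing back into it. This is only a workaround sketch (UNREAD-STRING is a made-up name), built from standard concatenated streams:

```lisp
;; Return a fresh input stream that yields STRING's characters
;; before the remaining characters of STREAM.
(defun unread-string (string stream)
  (make-concatenated-stream (make-string-input-stream string) stream))
```

The drawback, and the reason a real API addition would be nicer, is that the caller must switch to the returned stream: the original stream object itself is left untouched.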