IRC Archive for channel #xwiki
Last modified by Vincent Massol on 2012/10/18 19:12
florinciu joined #xwiki at 00:01
mflorea left at 00:09 (Quit: Leaving.)
florinciu left at 00:52 (Read error: Connection reset by peer)
tsziklay - (01:11): sdumitriu: first things first, your name is hard to spell :) second, xwiki supports macros, specifically for me the Python macro. Could I basically do something like "if link == click, run pythonMacroCode{{my code here to execute bash script}}" ?
abusenius left at 02:31 (Ping timeout: 240 seconds)
tsziklay - (02:42): sdumitriu ^
tsziklay left at 02:47 (Quit: ChatZilla 0.9.86 [Firefox 3.6.8/20100722155716])
boscop_ joined #xwiki at 03:53
boscop left at 03:56 (Ping timeout: 265 seconds)
MartinCleaver left at 04:00 (Quit: MartinCleaver)
venkatesh joined #xwiki at 04:39
venkatna joined #xwiki at 04:44
venkatesh left at 04:46 (Ping timeout: 240 seconds)
sdumitriu left at 05:48 (Ping timeout: 276 seconds)
venkatna left at 07:16 (Quit: Leaving)
venkatesh joined #xwiki at 07:21
asrfel joined #xwiki at 08:54
vmassol joined #xwiki at 09:01
vmassol left at 09:03 (Client Quit)
mflorea joined #xwiki at 09:21
sylviarusu joined #xwiki at 09:28
sburjan joined #xwiki at 09:32
sylviarusu left at 09:33 (Client Quit)
sylviarusu joined #xwiki at 09:36
tmortagne joined #xwiki at 09:40
jvdrean joined #xwiki at 09:59
Enygma` joined #xwiki at 10:04
sburjan left at 10:10 (Read error: Connection reset by peer)
sburjan joined #xwiki at 10:11
sdumitriu joined #xwiki at 10:13
sdumitriu left at 10:14 (Client Quit)
sdumitriu joined #xwiki at 10:14
abusenius joined #xwiki at 10:16
sylviarusu left at 10:18 (Quit: Leaving.)
sylviarusu joined #xwiki at 10:22
cjdelisle - (10:25): Hewlett-Packard owns 33,554,432 Class A IPv4 addresses, or 1/128 of the IPv4 space -- apparently more than the countries of India and China combined
cjdelisle - (10:26): ( http://mark.koli.ch/2009/07/ipv4-doomsday-were-running-out-of-ipv4-addresses-ipv4-address-exhaustion.html )
sdumitriu - (10:27): Yeah, I've heard the IPv4 doomsday warning for several years now, and it hasn't happened
cjdelisle - (10:27): it's more like the ipv4 address market day.
cjdelisle - (10:29): What is interesting is that the problem is not too many devices, it is a problem of people claiming everything in sight. Thus ip6 won't solve anything (even if it did get off the ground).
sburjan - (10:31): it's said the pool will get exhausted this September
sburjan - (10:32): well .. don't think just about computers. Phones have IP addresses too
sburjan - (10:33): and they want to "eliminate" NAT from this world :)
cjdelisle - (10:34): Problem is switching to ip6 just means people will rush to claim all the address space in a bigger block. The problem is human nature.
sburjan - (10:35): I don't think the same mistake will be made again as with IPv4
cjdelisle - (10:35): You know about the provider independent swamp?
sburjan - (10:35): the class A was too huge
sburjan - (10:35): nope
sburjan - (10:36): and now smaller companies that had bought IP addresses and wanted more got IP addresses from other subnets ...
KermitTheFragger joined #xwiki at 10:36
sburjan - (10:36): and the level of IPv4 address fragmentation is huge
cjdelisle - (10:36): Early on they just handed out 256 address blocks to anyone who wanted them. You could take these blocks to anywhere in the world.
cjdelisle - (10:36): Yes, fragmentation.
sburjan - (10:37): and everybody would like contiguous subnets of IPs
cjdelisle - (10:37): If you download the global routing table and grep /24 you find there are a huge number of tiny little nets which have to be in every router because they are provider independent.
sburjan - (10:38): for example the university in Iasi waited almost 2 years to get contiguous IP addresses .. swapped with other owners in order to get a contiguous block
cjdelisle - (10:38): yea, arin/interNIC ripped off everyone
cjdelisle - (10:39): anyway, in 1998, 88% of the global routing table was these little tiny class C networks from "the swamp"
sburjan - (10:39): don't know exactly what the swamp is :)
sburjan - (10:40): but most of us are class c :)
cjdelisle - (10:40): Imagine having a class c and you could get it routed to anywhere in the world.
sburjan - (10:40): only big companies have bigger classes .. which were exhausted long ago
cjdelisle - (10:41): hah, in the USA they won't even sell anything less than a /20
sburjan - (10:42): .. /20 is pretty huge
sburjan - (10:43): 4096 hosts
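The /20 size quoted above is easy to verify: a prefix of length n leaves 32 − n host bits, so a /20 spans 2^12 = 4096 addresses (4094 usable hosts once the network and broadcast addresses are excluded). A quick sketch:

```python
def addresses_in_prefix(prefix_len):
    """Total IPv4 addresses in a block with the given prefix length."""
    return 2 ** (32 - prefix_len)

# a /20 holds 4096 addresses (4094 usable hosts), a /24 ("class C")
# holds 256, and a /8 ("class A") holds 16,777,216 -- the size of the
# HP allocation mentioned above
```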
cjdelisle - (10:43): Anyway the class C's (/24's) in Europe are ok because the global routers just send all packets in the ripe /8 blocks to Europe, then the European backbones sort them out.
cjdelisle - (10:44): There are a bunch of /24's which are not in a European block or really in any block.
sburjan - (10:44): reserved ?
sburjan - (10:44): or not sold yet :)
cjdelisle - (10:45): No they were assigned before there was any arin/ripe/apnic.
sburjan - (10:45): anyway .. the whole world is on CIDR now
sburjan - (10:45): classes are obsolete now
cjdelisle - (10:45): Each of those /24's is in every global router in the world. The core operators hate them because they eat up ram in all the core routers.
cjdelisle - (10:46): Yea, these are the addresses from before CIDR.
sburjan - (10:46): I missed the era of classful addressing .. I was too young :)
cjdelisle - (10:46): https://encrypted.google.com/search?q=192%2F8+swamp
sburjan - (10:46): I discovered the internet in the CIDR era :))
cjdelisle - (10:47): Meh, I'm just learning about it now.
sburjan - (10:47): me too :)
sburjan - (10:47): CISCO classes :)
cjdelisle - (10:48): reading nanog and arin mailing lists and papers.
sburjan - (10:48): how can you read a mailing list without subscribing ?
cjdelisle - (10:48): google
cjdelisle - (10:49): Anyway everyone wants to own these swamp addresses because they are yours and you can have the routed to isp's anywhere in the world.
sburjan - (10:50): 192/8 .. don't quite understand this notation ?
sburjan - (10:51): this would mean a subnet with only the first 8 bits fixed ?
sburjan - (10:51): 255.0.0.0 ?
vmassol joined #xwiki at 10:51
cjdelisle - (10:51): 192/8 == 192.*.*.*
cjdelisle - (10:51): yea first 8 bits.
sburjan - (10:52): interesting
sburjan - (10:52): anyway the 192.168.*.* is reserved as internal. That segment isn't gonna get routed
cjdelisle - (10:53): yea, but like 192.24.31.0/24 is worth its weight in gold.
sburjan - (10:54): yeah,
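The notation discussed above maps directly onto Python's standard `ipaddress` module, which also confirms the point about 192.168.*.*: the 192.168/16 range is reserved private space, while a pre-CIDR "swamp" /24 such as 192.24.31.0/24 is public:

```python
import ipaddress

# 192/8 fixes the first 8 bits: netmask 255.0.0.0, 2**24 addresses
block = ipaddress.ip_network("192.0.0.0/8")
print(block.netmask)        # 255.0.0.0
print(block.num_addresses)  # 16777216

# RFC 1918 private space vs. a public "swamp" /24
print(ipaddress.ip_network("192.168.0.0/16").is_private)  # True
print(ipaddress.ip_network("192.24.31.0/24").is_private)  # False
```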
cjdelisle - (10:55): so in 1999 the swamp accounted for 88% of the internet routing table. it was mostly in 192/8 but some in 193-200 and 205-210.
sburjan - (10:56): anyway .. the switch to IpV6 will take time
sburjan - (10:56): even if most systems already support ipv6
sburjan - (10:56): If I had an IPv4 address, I wouldn't switch to v6
cjdelisle - (10:56): * it accounted for 49393 prefixes (mostly /24's)
cjdelisle - (10:57): as of 2006, the swamp accounted for 161,287 prefixes.
cjdelisle - (10:58): (still 88% of the global routing table)
cjdelisle - (10:59): no longer confined to 192, it has spread across all of the prefixes.
cjdelisle - (10:59): The problem is people want ip address space which is provider independent.
cjdelisle - (11:00): When they have provider independent address space, then they take it to another provider and the former block is no longer contiguous.
cjdelisle - (11:01): And thus all address space is degenerating into a swamp.
cjdelisle - (11:02): The end point is that every router must have a rule for every address.
cjdelisle - (11:03): There is some drama in the arin lists because they are handing out provider independent blocks in the ip6 space.
sburjan - (11:04): yeah..
sburjan - (11:05): but not all of the ISPs support independent addresses
sburjan - (11:05): they usually have a pool .. and assign IPs to clients from that pool (I'm sure you already know that)
vmassol left at 11:05 (Quit: Leaving.)
sburjan - (11:06): I didn't know that there are end-point ISPs that let you use an IP address you bought
cjdelisle - (11:06): yup, you can see "pool" in my reverse lookup. But for websites, there are huge advantages to owning your ip addresses.
sburjan - (11:06): ofc
sburjan - (11:07): anyway.. I'm not worried about this topic :)
sburjan - (11:07): I don't own anything :))
sburjan - (11:07): only one domain
cjdelisle - (11:07): You can multihome a website just by getting 2 (or more) isps and announcing the same address block from each.
sburjan - (11:08): multihome like multiple hosts hosting the same site ? .. for redundancy ?
florinciu joined #xwiki at 11:08
cjdelisle - (11:08): Redundancy, DDoS mitigation, etc.
cjdelisle - (11:09): The problems with ipv4 are that people "land rush" for as many addresses as possible then don't use them, and that everyone wants provider independent space, so all address space deaggregates until the internet is unroutable.
sburjan - (11:10): yeah ..
cjdelisle - (11:10): Neither of these problems are fixed by ipv6 and the second problem is made worse because the swamp will be bigger.
sburjan - (11:10): we'll need powerful routers full of routes
boscop__ joined #xwiki at 11:10
sburjan - (11:11): anyways there are dynamic routing protocols
cjdelisle - (11:11): Imagine a router that has to route to every /24 block. 16,777,216 entries in the routing table.
sburjan - (11:11): yes but when you have so many routes .. i guess you use a dynamic protocol
sburjan - (11:12): no one in their sane mind will write 16,777,216 static routes into the routing table
sburjan - (11:12): this is madness
boscop_ left at 11:12 (Read error: Connection reset by peer)
cjdelisle - (11:12): No they would all be advertised through bgp so the router would just find all 17 million routes.
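The worst-case figure above checks out: full deaggregation to /24s means one route for every possible /24 in the IPv4 space. A back-of-the-envelope sketch (the 32 bytes per entry is an illustrative guess, not any real router's cost):

```python
# one route for every possible /24 in IPv4: 2**24 prefixes
total_slash24 = 2 ** 24
print(total_slash24)  # 16777216

# hypothetical ~32 bytes per routing table entry
ram_mib = total_slash24 * 32 // 2**20
print(ram_mib, "MiB")  # 512 MiB, before any forwarding-plane overhead
```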
sburjan - (11:13): yeah..
cjdelisle - (11:13): If I was working on the core, I would be looking at ways to allow people to pick their own ip addresses like a global pool.
vmassol joined #xwiki at 11:13
cjdelisle - (11:14): aka rewrite bgp.
cjdelisle - (11:14): Instead I'll watch as the delicious drama unfolds.
sburjan - (11:15): don't worry
sburjan - (11:16): there are really good networking people out there
sburjan - (11:16): they won't allow the internet to fall apart
sburjan - (11:17): maybe they will impose some drastic restrictions
cjdelisle - (11:17): what? me? worry?
sburjan - (11:17): :)
cjdelisle - (11:17): http://198.108.95.21/meetings/nanog37/presentations/philip-smith.pdf <-- Great presentation (where I got most of my statistics)
sburjan - (11:18): cisco :)
sburjan - (11:18): but it's pretty old .. 2006
cjdelisle - (11:18): still really good.
cjdelisle - (11:18): So ipv6 _and_ dnssec both fail :)
sburjan - (11:19): well .. odd as it may sound .. I'm a big fan of NAT
sburjan - (11:20): but NAT has a lot of haters because in a large subnet, you need really expensive equipment to NAT.
cjdelisle - (11:20): I love NAT, awesome security.
sburjan - (11:20): special boards
sburjan - (11:20): if you start NAT on a regular Cisco router in a big subnet .. the router will freeze instantly
sburjan - (11:20): in less than a second
cjdelisle - (11:21): oh because it runs out of ram trying to track all connections?
sburjan - (11:21): but yeah, filtering content, security,etc .. .. +1 to NAT
sburjan - (11:21): yes
sburjan - (11:21): the guys from my university told me
sburjan - (11:21): they tried to nat around 10000 computers.. the router froze instantly
cjdelisle - (11:22): Yea and eventually you run out of port numbers.
sburjan - (11:22): they had to buy a special board for NAT.. which was expensive
sburjan - (11:22): yes ..
sburjan - (11:22): but let's get serious .. if you have an IP address block, and know how to subnet it .. almost all your problems are solved
cjdelisle - (11:23): I think it'd be cool to start a business doing anonymous proxying using a nat which assigns you an ip per connection.
sburjan - (11:23): create a subnet for servers, a subnet for workstations, a subnet for managers, or whatever. each of them with its own restrictions
cjdelisle - (11:24): like assign the user a session number when they log on and hash that session number with the ip of the site they are going to and mod that against the number of addresses in the pool to get the address to assign.
cjdelisle - (11:25): That way every time I go to some site, I still show up with the same ip, but when I go to a different site, different ip.
sburjan - (11:25): hmmm
sburjan - (11:25): you'd need a big pool
cjdelisle - (11:26): meh, a /24 should do it.
sburjan - (11:26): so your idea is to sell proxy services ?
cjdelisle - (11:27): you wouldn't be *guaranteed to have a different ip, but it would be near impossible to connect your ip from sending mail to your ip from browsing google etc.
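The scheme sketched above (hash the session with the destination, then mod by the pool size) fits in a few lines of Python. Everything here is illustrative: the pool is TEST-NET-3 (203.0.113.0/24) and the parameter names are made up; as noted in the chat, distinct destinations are not guaranteed distinct addresses, only very likely ones.

```python
import hashlib

def pool_address(session_id, dest_ip, pool_base="203.0.113.", pool_size=256):
    """Pick a stable per-(session, destination) address from a /24 pool.

    The same session + destination always yields the same address;
    different destinations usually (not provably) yield different ones.
    """
    digest = hashlib.sha256(f"{session_id}:{dest_ip}".encode()).digest()
    index = int.from_bytes(digest[:4], "big") % pool_size
    return pool_base + str(index)
```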
sburjan - (11:27): and what would be the advantage ?
cjdelisle - (11:27): "sell proxy services" suddenly it sounds like a horrible idea, like who's going to be buying that? and what for? oh... never mind
sburjan - (11:28): and if you don't do it over a VPN .. or your provider doesn't provide this service .. it's useless ..
sburjan - (11:29): logged connections everywhere
sburjan - (11:29): traceable
cjdelisle - (11:29): It's just if you don't want people to be building a dossier on you from your internet searches etc.
cjdelisle - (11:29): In other news, the new javadoc is built: http://hudson.xwiki.org/job/xwiki-platform-core-site-job/site/xwiki-core/apidocs/index.html?com/xpn/xwiki/api/package-summary.html
cjdelisle - (11:30): 3 hours to run.
vmassol left at 11:30 (Quit: Leaving.)
sburjan - (11:30): aren't we gonna lose the com.xpn prefix some day ?
cjdelisle - (11:31): well, all new stuff is org.xwiki but I don't see the old core going anywhere any time soon.
sburjan - (11:31): slow migration, don't know
cjdelisle - (11:32): Nobody's working on a replacement for the database driver.
sburjan - (11:32): the link you gave .. is it still building ? 'cause the link doesn't work
sburjan - (11:33): the com.xpn's are only for database driver ?
cjdelisle - (11:33): Hudson's slow, try the link again.
cjdelisle - (11:33): com.xpn is everything in the old core which is a lot of stuff.
sburjan - (11:33): well basically the core is com.xpn ? because I don't see any org.xwiki there
sburjan - (11:34): I can see :)
sburjan - (11:34): I see a lot of methods from the core aren't documented
cjdelisle - (11:35): Yea.
cjdelisle - (11:36): It's hard to just add documentation being unsure of how the code works, maybe you miss an idiosyncrasy of the function which proves important.
sburjan - (11:36): yeah..but the doc should have been written when the actual code was written, right ?
cjdelisle - (11:37): yup.
cjdelisle - (11:38): hmm, hudson builds are looking pretty good. Everything is building and most stuff has no failed tests.
sburjan - (11:40): agent 1 or agent 2 ?
cjdelisle - (11:41): ?
sburjan - (11:41): we have 2 hudson servers
sburjan - (11:41): actually 3, but one of them is offline
cjdelisle - (11:42): not really, there's only one server, agent 1 and 2 are just slaves working for the server.
cjdelisle - (11:42): I'm looking at http://hudson.xwiki.org/
sburjan - (11:42): me too
sburjan - (11:42): on the left side
cjdelisle - (11:42): Only problem is portlet and I know nothing about portlets.
sburjan - (11:43): so agent 1,2,3 are only slaves, and there is a bigger master server somewhere ?
cjdelisle - (11:45): the master is the computer which hosts myxwiki.org
sburjan - (11:45): and that doesn't build anything ? it's only master as a centralized info source ?
sburjan - (11:45): so he's doing the building too
cjdelisle - (11:47): I think the master does not build anything (too much work while also hosting myxwiki, maven.xwiki.org and hudson.xwiki.org)
sburjan - (11:52): I'm seeing some redundancies in the tests
sburjan - (11:52): same test written twice, in different projects, using different versions of Selenium
sburjan - (11:53): when the people get back from vacation, I'll call for a tests meeting
cjdelisle - (11:53): Does anyone know how I should go about getting the hudson site job to push its results to maven.xwiki.org? I'm assuming I want to put something into the "Deploy artifacts to Maven repository" box. Is there a password or something for maven?
abusenius - (11:55): hi everyone
sburjan - (11:55): cjdelisle: can you help me and see if you can find the ui-tests on hudson ? I can't find them, when they were last run, etc
sburjan - (11:55): hi abusenius
abusenius - (11:55): cjdelisle, seems that X509CryptoService is generating invalid certs
cjdelisle - (11:56): argh.
abusenius - (11:56): the web ID must be absolute
abusenius - (11:56): otherwise there is some exception on deserialization
cjdelisle - (11:56): ahh, I thought I had that set to give the external url.
abusenius - (11:56): getUserDocURL returns relative user url
cjdelisle - (11:57): should be using getDocument("user document name").getExternalUrl()
abusenius - (11:58): well, the problem is, it comes from the bridge
abusenius - (11:58): and it doesn't return XWikiDocument, but some interface
cjdelisle - (11:59): sburjan: http://hudson.xwiki.org/job/xwiki-product-enterprise-tests/com.xpn.xwiki.products$xwiki-enterprise-test-ui/
abusenius - (11:59): I can't find a way to get external url...
abusenius - (12:00): ah, I guess it's possible to cast DocumentModelBridge to xwikidoc
cjdelisle - (12:00): no, then you need to depend on xwiki-core
abusenius - (12:00): but then we'll have dependency on core
abusenius - (12:01): maybe there is some FancyAndAbsolutelyNotIntuitiveEntitySerializer or something?
cjdelisle - (12:01): xwiki-core will probably end up depending on xwiki-crypto and then maven says "enough of this rat's nest!"
cjdelisle - (12:01): lol
cjdelisle - (12:03): btw: see the new api javadoc?
sdumitriu - (12:03): You could use commons-beanutils or something like that
sdumitriu - (12:03): But that's a bit of overkill
abusenius - (12:03): extend document access bridge?
sburjan - (12:03): thanks Caleb
abusenius - (12:04): re javadoc, yes, cool
sdumitriu - (12:04): abusenius: There's a xwiki-url which should have the getExternalURL method
sdumitriu - (12:04): But it doesn't yet
abusenius - (12:05): :D
abusenius - (12:06): but there is getServerURL, which is already something
cjdelisle - (12:06): hmm, last I looked at xwiki-url it was too fragile to be any use for anything.
abusenius - (12:07): hm, or maybe not
abusenius - (12:08): last time I looked at it, it was completely useless
cjdelisle - (12:09): xwiki-url? When I tried to use it it blew up because I was using an ip address.
cjdelisle - (12:10): I'm going to go find something to eat, be back later.
abusenius - (12:10): I tried to use it for csrf-token, but it gives no access to the url parameters, so I gave up
abusenius - (12:11): *sigh*
sylviarusu left at 12:22 (Quit: Leaving.)
sylviarusu joined #xwiki at 12:34
yiiip joined #xwiki at 12:40
yiiip - (12:40): hi. how can I change the text color in the xwiki source editor
mflorea - (12:42): yiiip: through CSS, you can define a stylesheet extension and put there the needed CSS rules to change the color
yiiip - (12:42): hmm no, I don't want to change the editor, just some text in the wiki
yiiip - (12:42): there has to be something like [color=red] fooo [/color]
mflorea - (12:43): oh, ok, then you can write this:
mflorea - (12:43): some (% style="color:red" %)red(%%) text
yiiip - (12:43): ok, thanks
mflorea - (12:44): http://platform.xwiki.org/xwiki/bin/view/Main/XWikiSyntax#HParameters
abusenius left at 12:44 (Remote host closed the connection)
nuvolari - (13:27): oh dear... is it possible to lock out everyone on xwiki by mistake?
nuvolari - (13:34): false alarm
nuvolari - (13:34): glassfish died :P
yiiip - (13:34): where can i edit how the various tags are displayed
jvdrean - (13:42): yiiip: /xwiki/bin/view/XWiki/TagCloud ?
florinciu1 joined #xwiki at 13:51
florinciu left at 13:54 (Ping timeout: 276 seconds)
MartinCleaver joined #xwiki at 13:59
abusenius joined #xwiki at 14:00
cjdelisle - (14:03): abusenius: Should we just change the api to make the script provide the webid?
abusenius - (14:05): hm, we would need to pass it through in many cases
abusenius - (14:05): it is not a good solution imo
cjdelisle - (14:05): well, xwiki-crypto has a tree dependency structure so the number of changes is relatively small.
abusenius - (14:06): the users of that api would need to know what it is, provide something correct etc.
cjdelisle - (14:06): sure, but it's not a security problem because anyone can create a cert with anything on the server or off.
abusenius - (14:07): well, yes, it is a usability/design problem
cjdelisle - (14:07): Hmm, maybe would be a security issue if a "sign all new certs" authority was implemented.
abusenius - (14:07): that comes from the usability/design problem of xwiki api ^^
cjdelisle - (14:08): IMO the answer when there are dependencies is to provide all services then have the dependency rat's nest at the script level.
abusenius - (14:08): what is the web id url used for?
venkatesh left at 14:08 (Ping timeout: 240 seconds)
cjdelisle - (14:08): FOAFSSL compatibility.
cjdelisle - (14:09): And maybe we will want to use it for permissions management.
abusenius - (14:09): well, and what is it used for there?
cjdelisle - (14:10): You go to a foafssl website, it reads the webid, loads the page at that address, parses the fingerprint, compares to the cert fingerprint, knows what website you belong to.
abusenius - (14:10): why not implement a component-friendly api to get the external url? it is obviously needed, not only for crypto
cjdelisle - (14:11): Making script provide everything is an attractive answer because we could drop all dependencies on model and bridge.
abusenius - (14:12): well, but it just makes *everything* that uses crypto be responsible for knowing internal things like foafssl compatibility
abusenius - (14:13): for example, signed scripts couldn't care less about that
cjdelisle - (14:13): no, only everything which generates new certs (registration page)
abusenius - (14:13): but I agree that it is much easier than fixing xwiki-url
cjdelisle - (14:14): :)
cjdelisle - (14:14): I think xwiki-url and xwiki-action should be sandboxed.
abusenius - (14:14): wdym?
cjdelisle - (14:15): moved to contrib/sandbox until they work and we agree on them.
abusenius - (14:15): maybe
abusenius - (14:15): at least currently they are not really implemented, so nobody uses them
cjdelisle - (14:17): Yea, at best they are unused appendices, at worst they are suppressing development which might choose different directions.
sburjan left at 14:17 (Ping timeout: 248 seconds)
sylviarusu left at 14:19 (Read error: Connection reset by peer)
abusenius - (14:19): ok, we can pass the webid url around, but what do we do with getCurrentUser()?
cjdelisle - (14:20): certsFromSpkac(final String spkacSerialization, final int daysOfValidity)
cjdelisle - (14:20): would become
abusenius - (14:20): it is good to have it, because it limits the user from generating certs for arbitrary people
cjdelisle - (14:21): certsFromSpkac(final String spkacSerialization, final int daysOfValidity, final String userDocumentURL, final String userDocumentReference)
abusenius - (14:21): userName?
cjdelisle - (14:21): userName == "JohnSmith"
abusenius - (14:21): fullUserName
cjdelisle - (14:22): userDocumentReference == "xwiki:XWiki.JohnSmith"
abusenius - (14:22): in any case, it's not a reference
abusenius - (14:22): reference is DocumentReference or something
abusenius - (14:22): string is a name, path, maybe url, but not a reference
cjdelisle - (14:22): meh, until EntityReference is abandoned in favor of ObjectPointer ;)
sburjan joined #xwiki at 14:23
cjdelisle - (14:23): you Iasi guys all on wifi?
boscop__ is now known as boscop ([email protected])
abusenius - (14:25): seems that getUserDocURL is only used in crypto service for creating certs, so we can drop it
abusenius - (14:25): getUserName is also used in signedscripts
cjdelisle - (14:26): You are still using stuff from crypto.internal.*
cjdelisle - (14:26): ?
tmortagne left at 14:26 (Read error: Connection reset by peer)
cjdelisle - (14:27): hmm. Why does crypto store the user name? I forgot.
abusenius - (14:27): yea, didn't want to duplicate code
abusenius - (14:28): but if crypto will stop using it, we could move user doc utils to scripts
cjdelisle - (14:28): ^^ you are using internal "proprietary" api, I can change that tomorrow and it is not an api break
abusenius - (14:28): it's still under heavy development :)
cjdelisle - (14:29): hmm, I think the user document url is better than the user name.
cjdelisle - (14:29): I forgot why I had the user DocumentReference (serialized as string) in the first place.
sburjan - (14:29): .
abusenius - (14:29): well, in principle the name and url duplicate information
cjdelisle - (14:29): .
abusenius - (14:29): ,
abusenius - (14:30): we introduced the name to know which cert it is
abusenius - (14:31): and the webid url is stored in some sort of not-very-standard extension
cjdelisle - (14:32): uri is more specific, it is interwiki, we don't have to declare a dependency on model, it isn't tied to EntityReference. I like.
abusenius - (14:32): can we easily convert url to name?
tmortagne joined #xwiki at 14:32
cjdelisle - (14:32): why do you want to? just do a http get on that uri and parse the page.
cjdelisle - (14:33): we can put xml on the user's page with all the info you need.
abusenius - (14:33): if you implement a method in xwikicert to get the DocumentReference to the user doc from that webid extension, I'm fine with dropping the name :)
cjdelisle - (14:33): still have a dependency on model then.
abusenius - (14:33): what? parsing a page? it's the same xwiki
cjdelisle - (14:34): so?
abusenius - (14:34): it is *internal* stuff, why on earth should I use http get and an xml parser to get the freaking certificate stored in the db on the same machine?
abusenius - (14:34): from a component
cjdelisle - (14:35): what exactly do you need?
cjdelisle - (14:35): I mean you have the cert from the signature.
abusenius - (14:36): I want to make sure it is the correct one
abusenius - (14:36): i.e. I need to get the "real" cert by user name
cjdelisle - (14:36): k so you need to get a fingerprint from the user page.
abusenius - (14:36): I need to be sure this user is allowed to do what he is supposed to do
vmassol joined #xwiki at 14:37
abusenius - (14:38): no, I need to get the fingerprint from the DocumentReference and not some random URL hosted in south africa
abusenius - (14:38): *without* http get
cjdelisle - (14:38): South Africa is cool.
abusenius - (14:38): *and* without any parsing, rpc, soap etc.
cjdelisle - (14:38): They have this lottery there and everybody wins.
abusenius - (14:38): yea, sure :)
cjdelisle - (14:39): What if the user's page has a signed note saying "I grant you authority to do this"?
abusenius - (14:40): if this page is on another computer in south africa, it doesn't help much
cjdelisle - (14:40): why?
abusenius - (14:40): well, how do I get to the page of the person who signed that?
cjdelisle - (14:41): well the signature contains a cert.
cjdelisle - (14:41): you read the cert, and that contains a url.
cjdelisle - (14:41): recurse :)
abusenius - (14:42): and besides this, I'm -1 to anything that needs the network to get a piece of data stored on the same pc in the db accessible from the same java vm
abusenius - (14:42): it is just ridiculous
abusenius - (14:42): yea, and the cert is also stored in south africa
abusenius - (14:43): the certs are self signed
vmassol left at 14:43 (Quit: Leaving.)
cjdelisle - (14:43): well you have to continue recursion until you find a cert that you trust.
abusenius - (14:43): we cannot know if they are correct or not
yiiip - (14:43): when I'm listing all the documents containing a certain tag, how can I edit this view, for example to write something after every document title?
abusenius - (14:44): and how do I know which cert I trust if I can't get the cert of Admin stored in the *same* wiki
cjdelisle - (14:44): yiiip: it's one of the .vm files in /templates/ inside of the .war file. I think you're looking for docextras.vm?
cjdelisle - (14:44): That's why I would be for having one root cert stored in a config file.
cjdelisle - (14:45): that cert signs the admin cert and the admin cert can then sign anyone he wants.
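The chain-of-trust rule described above (follow issuer links until you hit the root cert from the config file, otherwise no trust) can be sketched with dicts standing in for real X.509 certificates; the field names and structure are hypothetical illustrations, not XWiki's crypto API.

```python
def chain_is_trusted(cert, certs_by_fingerprint, root_fingerprint, max_depth=10):
    """Walk issuer links; trust only chains that reach the configured root.

    max_depth also stops self-signed loops from walking forever.
    """
    steps = 0
    while cert is not None and steps < max_depth:
        if cert["fingerprint"] == root_fingerprint:
            return True  # reached the root cert stored in the config file
        cert = certs_by_fingerprint.get(cert["issuer"])
        steps += 1
    return False  # chain never reached the root: no trust
```

For example, root signs admin, admin signs a user; a stray self-signed cert never reaches the root and is rejected.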
yiiip - (14:45): cjdelisle: isn't this just for the document footer?
yiiip - (14:46): http://localhost:8080/xwiki/bin/view/Main/Tags?do=viewTag&tag=foo
abusenius - (14:46): don't you see how ridiculous it is to use the network to access data stored in RAM? it's like talking via satellite phone to a person sitting next to you
abusenius - (14:46): I'm also for having a root cert somewhere not in DB, but for a different reason
cjdelisle - (14:47): yiiip: You may be right, when I'm looking for something I usually pull up the source code and find an id="" for a tag near to what I want and then do a search for that id in the /templates folder.
cjdelisle - (14:47): *source code == html code.
flaviusolaru joined #xwiki at 14:47
cjdelisle - (14:48): abusenius: I don't know if I'd call loopback network access.
abusenius - (14:48): in comparison to a function call it is a carrier pigeon
cjdelisle - (14:49): how do you recommend getting the document?
abusenius - (14:49): like we do now
cjdelisle - (14:50): how is that? crypto doesn't get a document so I don't know.
abusenius - (14:50): it works, it does not depend on core
abusenius - (14:50): I mean doc reference
abusenius - (14:50): why do you need the doc?
cjdelisle - (14:50): I don't.
cjdelisle - (14:51): depending on bridge is not much better than depending on core, actually I would call it a hack.
abusenius - (14:51): there is no other choice
cjdelisle - (14:51): http :D
abusenius - (14:52): that's an even worse hack
cjdelisle - (14:52): I don't think so.
abusenius - (14:52): it is
cjdelisle - (14:52): Proof?
abusenius - (14:52): it uses core internally (actually even more, the whole server), just like bridge
abusenius - (14:53): but instead of making a function call, you send a carrier pigeon, then scan the message and ocr it
cjdelisle - (14:53): It has a lot of advantages, core could be replaced, bridge could be removed, but if xwiki still works, http get still works.
cjdelisle - (14:53): It would work in a cluster.
abusenius - (14:54): lets decouple all components using http then
abusenius - (14:54): its soooo great
abusenius - (14:54): no dependencies any more
abusenius - (14:54): everything is totally flexible
abusenius - (14:55): you could even replace core by another wiki and nobody would notice
cjdelisle - (14:56): quite a bit simpler, you only provide one service for the internal and the world...
abusenius - (14:56): (that was irony, not a proposal ;) )
abusenius - (14:59): is there some FancyDocumentResolver that can convert external url -> DocumentReference
cjdelisle - (14:59): http://en.wikipedia.org/wiki/Terracotta_Cluster
cjdelisle - (14:59): there it is.
abusenius - (15:00): http://en.wikipedia.org/wiki/KISS_principle
cjdelisle - (15:01): XWikiUniformResourceIndicatorDocumentReferenceResolutionAndSerializationInterface ?
tmortagne - (15:02): abusenius: pretty sure xwiki-url is already used in core to find the DocumentReference from the URL
abusenius - (15:02): sounds good, very precise ^^
tmortagne - (15:02): see XWiki#getDocumentReferenceFromPath
abusenius - (15:02): does it also work on strange configurations?
tmortagne - (15:03): abusenius: depends what you mean
tmortagne - (15:03): all i know is that we are using it right now to parse all URL in standard
abusenius - (15:03): I mean, url rewriting, clusters, proxies etc.
abusenius - (15:04): I intend to use it for PR handling, it should always work
cjdelisle - (15:04): +1 kiss principle, that's one reason why I like the http get and parse method. FOAFSSL is written, we don't have to trust the database, if we can't build a cert chain, then there is no trust.
cjdelisle - (15:05): * note: I'm not sold on http get myself, I just want to look at all aspects.
abusenius - (15:05): I find trusting the DB on *my* server is easier than trusting some other random site
cjdelisle - (15:06): I say only trust xwiki.properties.
abusenius - (15:06): you can't stop trusting the db, it contains all the data
cjdelisle - (15:07): sure you can, sign all data and verify on load.
abusenius - (15:07): either you sign really everything or you trust the db
abusenius - (15:07): and I can tell you, signing everything will not be implemented any time before version 4.0
cjdelisle - (15:08): Obviously the db can change a comment or a document, I'm talking about no trust for user permissions and PR.
abusenius - (15:08): which are stored in db
abusenius - (15:08): so whats the difference?
cjdelisle - (15:08): root cert is in a file, if the cert chain doesn't trace back to it, no trust.
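[The chain rule described here — trust nothing unless the cert chain traces back to a root pinned outside the database — can be modeled in a few lines. An illustrative toy sketch: certs are plain dicts, every name is made up, and real code must verify signatures, not just issuer/subject names:]

```python
# Toy model of "root cert is in a file; if the chain doesn't trace
# back to it, no trust". Certs are plain dicts for illustration only;
# a real implementation verifies each signature, not just the names.
ROOT_SUBJECT = 'wiki-root'  # hypothetical; would come from xwiki.properties

def chain_is_trusted(chain, root=ROOT_SUBJECT):
    """chain: leaf certificate first, each cert naming its issuer."""
    if not chain:
        return False
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert['issuer'] != issuer_cert['subject']:
            return False  # broken link in the chain
    return chain[-1]['subject'] == root  # must end at the pinned root

good = [{'subject': 'XWiki.JohnSmith', 'issuer': 'wiki-root'},
        {'subject': 'wiki-root', 'issuer': 'wiki-root'}]
bad = [{'subject': 'XWiki.JohnSmith', 'issuer': 'evil-ca'}]
```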
abusenius - (15:08): (and just a side remark, xwiki will not be used by fbi, nsa or similar)
cjdelisle - (15:09): yet
cjdelisle - (15:09): ];)
cjdelisle - (15:09): meh? how did that ] get there?
abusenius - (15:09): thats an evil nsa trick
cjdelisle - (15:10): I know, just noticing it looks like horns.
abusenius - (15:10): :)
cjdelisle - (15:10): It's a conspiracy, they put all the keys too close together on my keyboard.
abusenius - (15:11): I think we need to have a root cert to be able to export/import everything
cjdelisle - (15:11): hmm?
abusenius - (15:12): to be able to trust the content of the xar
abusenius - (15:13): if you export signed data and a self-signed cert, it is possible to exchange the cert, and nobody would notice
cjdelisle - (15:13): When you import the xar, the users who are imported would then be checked against the installed root cert.
abusenius - (15:13): yes, so we need to have something that is either built-in or not exported
cjdelisle - (15:14): Whoever imports the xar would have to have write on all the documents which it overwrites.
abusenius - (15:14): for things that we redistribute, built-in is better
abusenius - (15:14): well, yes
cjdelisle - (15:15): I would say ship a root cert but encourage users to change it.
abusenius - (15:15): I'd prefer ship root cert, and make client cert override it, if present
cjdelisle - (15:16): hmm. if we ship with an admin account, what will the webid be for the admin account?
abusenius - (15:16): it would be easier to go back to default
abusenius - (15:16): good question
cjdelisle - (15:17): So if we are going to do this, we have to also tackle the problem of bootstrapping a new wiki.
cjdelisle - (15:18): IMO it should make you register the admin account and then generate the root cert and let you download it and ask you to install it on the server.
abusenius - (15:18): there is no admin by default, only superadmin
abusenius - (15:18): admin is in the xar
abusenius - (15:19): maybe we should generate it on the first login as admin?
cjdelisle - (15:19): Yea, this would be a change. We would make the user register a root account just like linux does when you install.
abusenius - (15:19): and not ship admin in the xar
cjdelisle - (15:19): correct.
cjdelisle - (15:20): Well actually we could have an "admin like" user which is responsible for all xwiki dev team documents.
abusenius - (15:20): what if we allow not having webid for such cases?
cjdelisle - (15:21): how do you establish the trust chain?
abusenius - (15:21): we are forcing everyone to have a correct webid for "foafssl compatibility", most people don't know what it is and don't care
abusenius - (15:22): trust chains do not depend on foafssl
cjdelisle - (15:22): I can gut the webid _and_ the user name but certs become less useful.
abusenius - (15:23): imo webid is only useful if you use this cert in foafssl
abusenius - (15:23): i.e. store it in browser, use it for logging in etc
cjdelisle - (15:24): foafssl is powerful, it has a lot of applications, I don't want to throw it out on a whim.
abusenius - (15:24): also an interesting question, what happens if you set up your wiki, and after a year decide to change the host?
cjdelisle - (15:24): aka change the uri.
abusenius - (15:24): I don't say we should throw it out, I just say it is not the most important thing
cjdelisle - (15:25): certs only last a year :D
cjdelisle - (15:25): win
abusenius - (15:25): well, after 6 months? :)
abusenius - (15:25): you'll have to change the whole chain
abusenius - (15:25): resign everything
cjdelisle - (15:26): Me? I would put an entry in my hosts file :)
abusenius - (15:26): well, you're not alone
abusenius - (15:27): how about the other 157 users?
cjdelisle - (15:27): no, on the server.
cjdelisle - (15:27): Oh, I'm still thinking http get :)
cjdelisle - (15:28): hosts file myOldWebAddress.com 127.0.0.1
abusenius - (15:28): http get is bad
abusenius - (15:28): no, we shouldn't rely on the url for such things in the hope that at the end there will be localhost
cjdelisle - (15:28): I agree it's slow but bad?
abusenius - (15:28): unreliable
cjdelisle - (15:29): what's more reliable than uri?
abusenius - (15:29): it might be on the other side of the planet
cjdelisle - (15:30): yea, if you change your dns address, you break everything, all the permalinks on the internet. Resigning is just a small part of the problem.
abusenius - (15:31): it also makes it very easy to redirect to another site to get the cert
cjdelisle - (15:31): that's a + right?
abusenius - (15:31): no
cjdelisle - (15:31): hm?
abusenius - (15:31): its a big -
abusenius - (15:31): think of malicious attackers, rogue certs etc
cjdelisle - (15:31): if rsa or sha1 get broken we're sunk.
abusenius - (15:32): instead of looking at the trusted db, you need to rely on certs stored elsewhere
cjdelisle - (15:32): Is that what you're talking about?
abusenius - (15:32): it xould be stolen
abusenius - (15:32): *could
cjdelisle - (15:32): so what if somebody wants to host my signatures? Better their bandwidth than mine.
abusenius - (15:33): it is your *and* their bandwidth
abusenius - (15:33): remember, you wanted to do it recursively
abusenius - (15:34): so one request to your server will become 10 requests back and forth to south africa
cjdelisle - (15:34): there's sort of a DoS attack because they could send you on a wild goose chase across the internet trying to validate a cert but you can have a maximum hop count or something.
abusenius - (15:35): hosted on an 0wned windows box with a 2k/s modem connection
abusenius - (15:35): it is way too complicated and unreliable
cjdelisle - (15:36): have you read the rfc for TCP lately?
abusenius - (15:36): no :)
cjdelisle - (15:36): re complicated and unreliable ^^
abusenius - (15:37): exactly, don't rely on it
cjdelisle - (15:37): it's the same way as https works.
abusenius - (15:39): ok, lets stop wasting our time
cjdelisle - (15:39): there does seem to be a problem though.
abusenius - (15:40): back to the original problem :) I'd prefer to implement getExternalURL somewhere, this would solve everything
cjdelisle - (15:41): If 20 people put permission objects on my page and 20 people put permission objects on each of their pages, you have an explosion of search directions you can take to try to resolve a cert :(
abusenius - (15:42): and if their pages are hosted elsewhere you can't even cache
cjdelisle - (15:42): Oddly enough it seems to be the same problem as trying to fix bgp.
abusenius - (15:43): anyway, seems that getExternalURL has to be in bridge, because the info is in the core
abusenius - (15:43): which is kind of bad
cjdelisle - (15:44): Ok, so you need the document reference (as string) right?
abusenius - (15:44): re what?
abusenius - (15:45): accessing cert?
cjdelisle - (15:45): you need xwiki:XWiki.JohnSmith to be in the cert?
abusenius - (15:45): well, it is easier to create a document reference from that
abusenius - (15:46): (the code is already there)
cjdelisle - (15:46): If you don't need it then I'll remove it entirely.
abusenius - (15:46): what do you put to SubjectDN?
cjdelisle - (15:46): ""
abusenius - (15:46): and IssuerDN?
abusenius - (15:47): this is bad
cjdelisle - (15:47): it'll shorten the signatures some.
abusenius - (15:47): you will not be able to see who it is for
cjdelisle - (15:47): well you could just copy the webid in there but the signature gets longer.
abusenius - (15:47): SubjectDN is standard, the extension web id is using is not
yiiip left at 15:48 (Quit: Page closed
abusenius - (15:48): who cares about +50 byte
abusenius - (15:48): it's about 1K already
cjdelisle - (15:48): the signatures?
abusenius - (15:50): definitely
abusenius - (15:50): 4096 bit is 512 byte
abusenius - (15:50): even for 2048, you have signature, signature in cert, public key...
abusenius - (15:50): *4/3
cjdelisle - (15:50): browser generated cert = 2192 base64 chars.
cjdelisle - (15:50): 64 more chars = 1 more line of base64.
abusenius - (15:51): and? its like 5%
cjdelisle - (15:51): for duplicated text?
abusenius - (15:51): well, use the user name then :)
abusenius - (15:51): its short
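[The size figures in this exchange can be sanity-checked: base64 turns every 3 raw bytes into 4 characters, so a 4096-bit (512-byte) signature is 684 base64 chars and a 2048-bit one is 344; the 2192-char browser cert quoted above is of course the whole cert, not just the signature. A quick sketch:]

```python
import math

def base64_len(n_bytes):
    # every 3 raw bytes become 4 base64 chars, rounded up to a full group
    return 4 * math.ceil(n_bytes / 3)

# 4096-bit signature = 512 raw bytes
assert base64_len(512) == 684
# 2048-bit signature = 256 raw bytes
assert base64_len(256) == 344
# "+50 byte" of duplicated DN text costs roughly one extra 64-char line
assert base64_len(50) == 68
```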
cjdelisle - (15:52): blah, dependencies.
abusenius - (15:52): regex :)
cjdelisle - (15:52): hahaha
cjdelisle - (15:55): do you need the user name?
abusenius - (15:55): I need DocumentReference to user page and I need to see whose cert it is (in the certificate manager in FF for example)
cjdelisle - (15:55): We could allow the script to specify a common name (foafssl did this).
abusenius - (15:55): and having a readable SubjectDN is more important than saving 50 byte
abusenius - (15:55): and allow to have dofferent name and url, great
abusenius - (15:55): *different
xwikibot joined #xwiki at 15:57
cjdelisle - (15:57): I was thinking xwikibot was hosted there, actually it was xwikibridge bot.
cjdelisle - (16:01): So I'm to understand that you will be needing the user document name.
mflorea left at 16:01 (Quit: Leaving.
abusenius - (16:01): yes
tmortagne1 joined #xwiki at 16:01
tmortagne left at 16:01 (Read error: Connection reset by peer
cjdelisle - (16:01): ouch, xwiki.org not doing well...
xwikibot joined #xwiki at 16:03
abusenius - (16:03): yea, would help for sure when the server is down, like now...
cjdelisle - (16:03): suddenly a cluster sounds nicer.
cjdelisle - (16:04): aka http get :)
abusenius - (16:04): noooooooooooo
cjdelisle - (16:04): :D
cjdelisle - (16:05): I know it's slow and I wish it was faster like dns or something.
abusenius - (16:06): as if dns is the fastest thing ever
cjdelisle - (16:06): it's pretty fast, you send a udp packet and the server sends one back.
abusenius - (16:07): it still takes dozens of milliseconds
sburjan left at 16:07 (Quit: Ex-Chat
cjdelisle - (16:07): dnssec is like you send a udp packet and it sends like 800.
abusenius - (16:07): fast is when it takes dozens of nanoseconds
cjdelisle - (16:08): so naturally you set the source port on your packet to ohhh twitter?
cjdelisle - (16:08): nanoseconds? java? lol
abusenius - (16:08): ok, at least microseconds on average ^^
cjdelisle - (16:09): I think it's pretty common to use caches which are on a different server.
cjdelisle - (16:09): aka network connection.
cjdelisle - (16:10): memcached
abusenius - (16:11): if the different server is in the other room as opposed to another continent it is a large improvement
abusenius - (16:12): but other room as opposed to another memory cell is not
abusenius - (16:12): (unless you use a huge numa system)
cjdelisle - (16:15): memcached uses tcp but the servers stay connected all the time.
cjdelisle - (16:15): udp is optional.
flaviusolaru left at 16:15 (Read error: Connection reset by peer
cjdelisle - (16:20): dozens of microseconds isn't really going to happen no matter what you do. Database loads?
cjdelisle - (16:21): Even if you get from cache, it has to clone the document, all the objects, the attachments etc.
cjdelisle - (16:25): getDocumentURL(DocumentReference documentReference, String action, String queryString, String anchor, boolean isFullURL);
cjdelisle - (16:25): ?
abusenius - (16:28): where is it?
cjdelisle - (16:31): proposed.
tmortagne1 left at 16:32 (Read error: Connection reset by peer
tmortagne joined #xwiki at 16:33
abusenius - (16:36): yes, something like this
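[For the record, a rough model of the proposed getDocumentURL signature, written in Python for brevity. The reference format, path layout, and base URL are all assumptions for illustration, not the actual XWiki URL factory:]

```python
def get_document_url(doc_ref, action='view', query_string=None,
                     anchor=None, is_full_url=False,
                     base='http://localhost:8080/xwiki'):
    # doc_ref like 'xwiki:Space.Page'; assumes the standard
    # /bin/<action>/<Space>/<Page> layout (hypothetical sketch)
    _, _, local = doc_ref.rpartition(':')
    space, _, page = local.partition('.')
    path = '/bin/%s/%s/%s' % (action, space, page)
    url = (base + path) if is_full_url else ('/xwiki' + path)
    if query_string:
        url += '?' + query_string
    if anchor:
        url += '#' + anchor
    return url
```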
cjdelisle - (16:39): I think what I need to do is change the hudson site build to say mvn clean site site:deploy correct?
cjdelisle - (16:40): (to get the maven.xwiki.org/site to be updated)
cjdelisle - (16:40): sdumitriu: tmortagne ? ^^
tmortagne - (16:42): cjdelisle: just mvn clean site:deploy i think
cjdelisle - (16:42): I tried mvn clean site:deploy locally and it said run site first.
cjdelisle - (16:42): I did mvn clean site site:deploy and it tried to connect via ssh so I figured it worked.
tmortagne - (16:42): ok, that's weird then
tmortagne - (16:43): but i don't know site plugin very well
cjdelisle - (16:43): Maven is documented really well :)
cjdelisle - (16:45): changed. and changed to be bound to agent2 since agent1 always seems to be busy.
cjdelisle - (16:46): and building.
abusenius - (16:46): great, there is a nice StandardXWikiURLFactory, which cannot work because HostResolver it uses is not implemented...
cjdelisle - (16:48): IMO committing stuff that is incomplete, doesn't work, isn't tested is wrong.
abusenius - (16:49): the test works, because HostResolver is mocked there :)
cjdelisle - (16:51): I don't like that style because you don't know that the HostResolver interface can possibly be implemented.
abusenius - (16:51): at least a "BIG PHAT WARNING: NOT IMPLEMENTED YET" would be cool
cjdelisle - (16:52): sandbox.
cjdelisle - (16:53): It would be nice to sandbox all nonfunctional code.
abusenius - (16:53): would be hard, some classes from that package are already used
cjdelisle - (16:53): well then they are functional.
cjdelisle - (16:56): I'm thinking about proposing adding "latest-release" and "second-latest-release" to svn which are externals pointing to the last release and release before last.
cjdelisle - (16:56): That way hudson need not be changed when a release happens.
cjdelisle - (16:56): not sure if it will save work or not though.
cjdelisle - (16:58): might just shift the work from hudson jobs to svn changes.
abusenius - (17:01): nice, just managed to convert external url to user name :)
abusenius - (17:01): it only took 8 lines and 3 new dependencies...
cjdelisle - (17:02): neat, can you trust it will be the same as the xwiki-core urlFactory?
abusenius - (17:04): probably not :)
abusenius - (17:05): I more or less copy-pasted XWiki#getDocumentReferenceFromPath, so it would fail too
cjdelisle - (17:07): I'm playing with maven versions plugin, it looks promising.
abusenius - (17:13): hm, seems that the url -> name conversion is not quite correct
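[The url -> name conversion under discussion can be sketched like this. A hypothetical helper: like the copy-pasted XWiki#getDocumentReferenceFromPath logic, it assumes the standard /xwiki/bin/<action>/<Space>/<Page> layout, so it fails on rewritten or proxied URLs, which is exactly the fragility noted here:]

```python
from urllib.parse import urlparse

def reference_from_url(url):
    # assumes <context>/bin/<action>/<Space>/<Page>, e.g. the default
    # /xwiki/bin/view/XWiki/JohnSmith; anything rewritten by a proxy
    # or a short-URL configuration will not match and returns None
    parts = [p for p in urlparse(url).path.split('/') if p]
    if len(parts) >= 5 and parts[1] == 'bin':
        return '%s.%s' % (parts[3], parts[4])
    return None
```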
MartinCleaver left at 17:14 (Quit: MartinCleaver
Enygma` left at 17:17 (Ping timeout: 240 seconds
tmortagne left at 17:28 (Read error: Connection reset by peer
tmortagne joined #xwiki at 17:29
lucaa joined #xwiki at 17:35
lucaa - (17:35): guys, it seems that platform does not build with a clean repo, due to commons-net:2.1 which is not found
lucaa - (17:36): oanat discovered on her machine and I tried too after deleting commons-net:2.1 and I got:
lucaa - (17:37): http://pastebin.com/176jQv6y
tmortagne - (17:39): indeed there is no 2.1 version on http://repo1.maven.org/maven2/commons-net/commons-net/
tmortagne - (17:40): looks like 2.1 has not been releases anyway...
tmortagne - (17:40): s/releases/released/
lucaa - (17:41): what is this? how did it end up in our deps then?
tmortagne - (17:43): sdumitriu: looks like you did the upgrade to commons-net in root pom.xml, any idea what it used to work ?
tmortagne - (17:44): s/what/why/
sdumitriu - (17:44): It appeared to be released, but then it was unreleased
tmortagne - (17:45): was this version very important for us or would be downgrade to 2.0 ?
tmortagne - (17:45): s/would be/could we/
sdumitriu - (17:45): Better downgrade
lucaa - (17:45): true, true I found other people on the web with the same pb
lucaa - (17:45): so it seems that at some point it was there and now it's not
tmortagne - (17:46): lucaa: it's not only maven issue, it says on commons-net website that the last version is 2.0
lucaa - (17:47): there are release changes though: http://commons.apache.org/net/changes-report.html#a2.1
tmortagne - (17:47): yep
asrfel left at 17:47 (Quit: Leaving.
tmortagne left at 17:51 (Ping timeout: 248 seconds
tmortagne joined #xwiki at 17:52
cjdelisle - (18:06): "This version was not released by Apache Commons and the project does not
cjdelisle - (18:06): know, what it actually contains."
cjdelisle - (18:07): a bit ominous
cjdelisle - (18:07): "Apache Commons PMC realized about two weeks ago that the mvn repo
cjdelisle - (18:07): contains artifacts for commons-net 2.1 which has never been released and
cjdelisle - (18:07): subsequently removed those from central"
abusenius - (18:14): nice
abusenius left at 18:31 (Ping timeout: 260 seconds
tsziklay joined #xwiki at 18:57
cjdelisle - (19:08): tsziklay: you had a question.
cjdelisle - (19:08): "xwiki supports macros, specifically for me the Python macro. Could I basically do something like "if link == click, run pythonMacroCode{{my code here to execute bash script}}"
tsziklay - (19:08): yes thats right
cjdelisle - (19:09): when the user clicks a link they load a page correct?
tsziklay - (19:09): right
cjdelisle - (19:09): So you could put something in the link like [[link to somewhere>>Some.Where?runScript=1]]
cjdelisle - (19:10): and at the page Some.Where, you put a python script like the following:
cjdelisle - (19:10): {{python}} if request.getParameter('runScript') == 1 : do something..... {{/python}}
cjdelisle - (19:11): that's pseudopython, I don't really know python.
cjdelisle - (19:13): Due to a bug in jython, you might have to begin your python macro with this snippet: http://code.xwiki.org/xwiki/bin/view/Snippets/AccessToBindingsInPythonSnippet
cjdelisle - (19:13): (that is in order to have the request object available to you.)
cjdelisle - (19:13): more information is here: http://platform.xwiki.org/xwiki/bin/view/DevGuide/Scripting#HPythonSpecificInformation
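[The macro logic sketched in this exchange, as testable plain Python. In the wiki the 'request' object comes from XWiki's script context; here it is emulated with a dict, and the script path is made up:]

```python
import subprocess

SCRIPT = '/path/to/myscript.sh'  # hypothetical bash script

def should_run(params):
    # request.getParameter returns a string, so compare to '1', not 1
    return params.get('runScript') == '1'

def handle(params, dry_run=True):
    if not should_run(params):
        return None
    cmd = ['bash', SCRIPT]
    if dry_run:
        return cmd  # the command that would be executed
    return subprocess.run(cmd, capture_output=True)
```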
abusenius joined #xwiki at 19:14
cjdelisle - (19:15): you are a man of many ip addresses Alex.
abusenius - (19:17): lol
tsziklay - (19:19): I see, that sounds good cjdelisle. Is there any more documentation on xwiki so I know exactly what code I need to make a page?
cjdelisle - (19:20): You want to make a page programmatically? or you mean how to make a page manually?
KermitTheFragger left at 19:25 (Remote host closed the connection
tsziklay - (19:30): cjdelisle: I'm not sure, I just do not know how to make an xwiki page. my boss has indicated that he wants this "if click then run script" functionality and said xwiki is probably able to support it.
tsziklay - (19:31): cjdelisle: basically all I would need is a single page showing proof of concept for this; I do not need anything beyond say, a title, a URL, and a link that runs my python code.
cjdelisle - (19:32): Do you have to be able to view the page with the python code without running it?
tsziklay - (19:36): cjdelisle: that doesn't matter I think. my boss just wants a wiki page that any of our employees can have access to that will sort of run the script remotely. for now it doesn't need to be fleshed out beyond that.
cjdelisle - (19:36): so create a page and make the content this: {{python}} print 'hello world' {{/python}}
cjdelisle - (19:37): You have a wiki server running in the office?
tsziklay - (19:38): right, that's what I don't know how to do: "create a page". is there anything to read about on how to do this? the xwiki documentation I have found is incredibly vague
tsziklay - (19:38): cjdelisle: we have a wiki server but we want to create a new one that will incorporate this python functionality
cjdelisle - (19:39): if you have an xwiki server (which is relatively new), you have python functionality.
tsziklay - (19:39): cjdelisle: I am going to be running the preliminary wiki on a crappy server with some AMD processor and 2 gb of ram, basically something that was not meant to serve data :)
tsziklay - (19:40): cjdelisle: right, and I don't have an xwiki server. we have a different wiki currently and want to transition to xwiki
tsziklay - (19:40): cjdelisle: so my task is to figure out how to get an xwiki server up and running. then add a python script to it.
cjdelisle - (19:41): The hard part is getting the server up, running python is very easy.
tsziklay - (19:42): yup. can you point me to any kind of documentation about getting the server up?
cjdelisle - (19:43): http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Installation
cjdelisle - (19:43): xwikibot, you need more features.
jvdrean left at 19:46 (Quit: Leaving.
cjdelisle - (19:46): hmm we're still having that ipv6 dependency problem.
tsziklay left at 19:47 (Ping timeout: 264 seconds
tsziklay joined #xwiki at 19:49
tsziklay - (19:53): got some kind of error, booted me and rejoined me just now
tsziklay - (19:53): btw thanks for the link cjdelisle. I assume there is also more information on actually creating a simple page?
cjdelisle - (19:54): I'm not sure. I would imagine. if not then you can write it ;)
tsziklay - (19:57): alright, I guess I'll jump off that bridge when I get there
cjdelisle - (19:57): how optimistic. I'll be there to help (push) you. :)
tsziklay - (19:57): I am thinking that I'll just install the standalone distribution. are there any disadvantages to that other than possibly not being familiar with what they give you?
cjdelisle - (19:58): the disadvantage of the default (zip/exe file) distribution is it can't handle lots of pages or large attachments.
cjdelisle - (19:59): if you're testing then definitely use the default.
tsziklay - (20:00): actually I already have tomcat6 and mysql installed on the machine I will be using. Can I incorporate those instead?
cjdelisle - (20:01): you can. Do you plan to be uploading pages with chinese writing?
cjdelisle - (20:03): or really any pages which use characters outside of the common English, French, Spanish, German etc. languages?
tsziklay - (20:04): no, just english :)
cjdelisle - (20:05): Ok then mysql is fine.
cjdelisle - (20:05): Mysql has a limitation which prevents some languages from working correctly.
tsziklay - (20:07): guess they do databases differently in the orient :D
tsziklay - (20:07): ok, so if I am going to use mysql and tomcat that I already have installed, then I don't want to do the default distribution right?
tsziklay - (20:08): cjdelisle: I'll want to do the manual zip install instead?
cjdelisle - (20:09): If you want to use mysql/tomcat, you need the .war file but I would do the default if you just need something quick to show the boss.
lucaa left at 20:09 (Ping timeout: 276 seconds
tsziklay - (20:11): it doesn't need to be done by today, but I do only have 1 week left. I'm a temp intern here :)
cjdelisle - (20:12): There are a number of pitfalls and traps when installing with mysql, a number of others with tomcat. I definitely recommend the .zip file if you're on linux, .exe if windows (server).
lucaa joined #xwiki at 20:15
cjdelisle - (20:17): Sixy.ch: directory of IPv6 enabled web sites 3846 sites in database FAIL
tsziklay - (20:18): I am on linux, so I guess I will do the .zip file for now
tsziklay - (20:19): it does have links to instructions on how to install with tomcat and mysql, and since I have those on the machine already wouldn't it be a little easier?
tsziklay - (20:19): cjdelisle ^
cjdelisle - (20:19): No definitely not easier. The zip version has the database and server included.
cjdelisle - (20:20): you just type start-wiki.sh
cjdelisle - (20:52): hmm. something missing in the certificates is the protocol version.
abusenius - (21:21): protocol?
cjdelisle - (21:22): Well foafssl gets the modulus from the page and parses it as xml.
cjdelisle - (21:23): suppose foafssl caught on and everyone was using it. Then we would optimize the connection to use like 1 udp packet or something.
cjdelisle - (21:24): So you put a version number in the cert so the client knows how it is allowed to call the server.
abusenius - (21:26): yea, it wouldn't hurt
abusenius - (21:26): does foafsll do something like this already?
cjdelisle - (21:26): Maybe the client should send a header telling how it can receive the server's response.
cjdelisle - (21:26): hah, no.
cjdelisle - (21:27): Nobody ever seems to make protocols upgradable.
cjdelisle - (21:36): http://maven.xwiki.org/site/xwiki-core-parent/xwiki-core/apidocs/index.html?overview-summary.html
cjdelisle - (21:37): yay, upgraded to 2.5-SNAPSHOT, now I can close the first issue in what seems like forever.
lucaa left at 21:51 (Quit: Leaving.
tmortagne left at 22:09 (Quit: Leaving.
vmassol joined #xwiki at 22:12
MartinCleaver joined #xwiki at 22:26
florinciu1 left at 22:59 (Quit: Leaving.
MartinCleaver left at 23:11 (Ping timeout: 260 seconds
MartinCleaver joined #xwiki at 23:18
vmassol left at 23:24 (Ping timeout: 246 seconds
vmassol joined #xwiki at 23:30
MartinCleaver left at 23:33 (Quit: MartinCleaver
MartinCleaver joined #xwiki at 23:36
tsziklay - (23:51): cjdelisle: I came across this site that explains how to install xwiki on ubuntu with tomcat and mysql. However I don't know if I should do this because I already have a tomcat/mysql server on the machine for something else. is it possible to have two war files and basically two server apps (grails and xwiki) on the same machine like that? I don't want to lose the functionality of the first one
tsziklay - (23:51): here is the site btw http://halfahairwidth.blogspot.com/2009/09/how-to-install-xwiki-on-ubuntu.html
tsziklay - (23:52): the instructions on that site look good up until it gets to the point where its editing the "hibernate" file and doing some xwiki user configuration...
cjdelisle - (23:52): you can have 2 servers on the same machine, you have to change the port number if you want to run both at once.
tsziklay - (23:52): ah, I see. I can change the xwiki one fairly easily right?
cjdelisle - (23:53): 'editing the "hibernate" file' <-- Why I suggested the easy installation.
cjdelisle - (23:53): don't put a lot of data in the easy install version, it might be hard to port it over. Fine for testing though.
tsziklay - (23:54): and btw if I have my /vim/tomcat/webapps/ file have the xwiki war file AND another war file (for my grails server) will that mess anything up?
tsziklay - (23:54): i.e. will it not know which war file to call or anything like that when I start either server?
tsziklay - (23:55): cjdelisle: I may end up doing the easy install if time doesn't allow me to figure out the difficult install, but since my superiors want to upgrade to xwiki for the company's wiki structure I assume that it would be best to have a more fleshed out version that is capable of handling many pages
cjdelisle - (23:56): You can run multiple war files on one tomcat.
cjdelisle - (23:57): But I think you ought to get something running so you can start learning how to use it (create pages) as quick as possible.
tsziklay - (23:58): yeah that may be better. plus I started downloading the war file alone and the only location to download from is France, so I'm stuck waiting for a nice several-hours long download :(
cjdelisle - (10:27): it's more like the ipv4 address market day.
cjdelisle - (10:29): What is interesting is that the problem is not too many devices, it is a problem of people claiming everything in sight. Thus ip6 won't solve anything (even if it did get off the ground).
sburjan - (10:31): it's said that this September it will get exhausted
sburjan - (10:32): well .. don't think just at computers. Phones have IP addresses too
sburjan - (10:33): and they want to "eliminate" NAT from this world :)
cjdelisle - (10:34): Problem is switching to ip6 just means people will rush to claim all the address space in a bigger block. The problem is human nature.
sburjan - (10:35): I don't think the same mistake will be made again as with IPV4
cjdelisle - (10:35): You know about the provider independent swamp?
sburjan - (10:35): the class A was too huge
sburjan - (10:35): nope
sburjan - (10:36): and now smaller companies that have IP addresses bought and wanted more, they got IP addresses from other subnets ...
KermitTheFragger joined #xwiki at 10:36
sburjan - (10:36): and the level of IPv4 address fragmentation is huge
cjdelisle - (10:36): Early on they just handed out 256 address blocks to anyone who wanted them. You could take these blocks to anywhere in the world.
cjdelisle - (10:36): Yes, fragmentation.
sburjan - (10:37): and everybody would like continuous subnets of ip's
cjdelisle - (10:37): If you download the global routing table and grep /24 you find there are a huge number of tiny little nets which have to be in every router because they are provider independent.
sburjan - (10:38): for example the university from Iasi waited almost 2 years so it can get continuous IP addresses .. switched with other owners in order to get a continuous block
cjdelisle - (10:38): yea, arin/interNIC ripped off everyone
cjdelisle - (10:39): anyway, in 1998, 80% of the global routing table was these little tiny class C networks from "the swamp"
cjdelisle - (10:39): *88%
sburjan - (10:39): don't know exactly what the swamp is :)
sburjan - (10:40): but most of us are class c :)
cjdelisle - (10:40): Imagine having a class c and you could get it routed to anywhere in the world.
sburjan - (10:40): only big companies have bigger classes .. which exhausted long ago
cjdelisle - (10:41): hah, in the USA they won't even sell anything less than a /20
sburjan - (10:42): .. /20 is pretty huge
sburjan - (10:43): 4096 hosts
cjdelisle - (10:43): Anyway the class C's (/24's) in Europe are ok because the global routers just send all packets in the ripe /8 blocks to Europe, then the European backbones sort them out.
cjdelisle - (10:44): There are a bunch of /24's which are not in a European block or really in any block.
sburjan - (10:44): reserved ?
sburjan - (10:44): or not sold yet :)
cjdelisle - (10:45): No they were assigned before there was any arin/ripe/apnic.
sburjan - (10:45): anyway .. the whole world is on CIDR now
sburjan - (10:45): classes are obsolete now
cjdelisle - (10:45): Each of those /24's is in every global router in the world. The core operators hate them because they eat up ram in all the core routers.
cjdelisle - (10:46): Yea, these are the addresses from before CIDR.
sburjan - (10:46): I didn't get to see the classful addressing period .. I was too little :)
cjdelisle - (10:46): https://encrypted.google.com/search?q=192%2F8+swamp
sburjan - (10:46): I discovered the internet in the CIDR era :))
cjdelisle - (10:47): Meh, I'm just learning about it now.
sburjan - (10:47): me too :)
sburjan - (10:47): CISCO classes :)
cjdelisle - (10:48): reading nanog and arin mailing lists and papers.
sburjan - (10:48): how can you read a mailing list without subscribing ?
cjdelisle - (10:48): google
cjdelisle - (10:49): Anyway everyone wants to own these swamp addresses because they are yours and you can have them routed to ISPs anywhere in the world.
sburjan - (10:50): 192.8 don't quite understand this notation ?
sburjan - (10:51): 192/8
sburjan - (10:51): this would mean a subnet only with the first 8 bits only ?
sburjan - (10:51): 255.0.0.0 ?
vmassol joined #xwiki at 10:51
cjdelisle - (10:51): 192/8 == 192.*.*.*
cjdelisle - (10:51): yea first 8 bits.
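The /N notation being discussed can be checked with Python's standard `ipaddress` module; a small sketch (the example prefixes are illustrative, not taken from any real allocation):

```python
import ipaddress

# "192/8" means "match the first 8 bits", i.e. 192.0.0.0
# with netmask 255.0.0.0 -- everything in 192.*.*.*
net = ipaddress.ip_network("192.0.0.0/8")
print(net.netmask)        # 255.0.0.0
print(net.num_addresses)  # 16777216 addresses in a /8

# A /20, the smallest block mentioned above as sold in the USA:
small = ipaddress.ip_network("10.0.0.0/20")
print(small.num_addresses)  # 4096
```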
sburjan - (10:52): intresting
sburjan - (10:52): anyway the 192.168.*.* is reserved as internal. That segment isn't gonna get routed
cjdelisle - (10:53): yea, but like 192.24.31.0/24 is worth its weight in gold.
sburjan - (10:54): yeah,
cjdelisle - (10:55): so in 1999 the swamp accounted for 88% of the internet routing table. it was mostly in 192/8 but some in 193-200 and 205-210.
sburjan - (10:56): anyway .. the switch to IpV6 will take time
sburjan - (10:56): even if most systems already support ipv6
sburjan - (10:56): If I had an IPv4 address, I wouldn't switch to v6
cjdelisle - (10:56): * it accounted for 49393 prefixes (mostly /24's)
cjdelisle - (10:57): as of 2006, the swamp accounted for 161,287 prefixes.
cjdelisle - (10:58): (still 88% of the global routing table)
cjdelisle - (10:59): no longer confined to 192, it has spread across all of the prefixes.
cjdelisle - (10:59): The problem is people want ip address space which is provider independent.
cjdelisle - (11:00): When they have provider independent address space, then they take it to another provider and the former block is no longer contiguous.
cjdelisle - (11:01): And thus all address space is degenerating into a swamp.
cjdelisle - (11:02): The end point is that every router must have a rule for every address.
cjdelisle - (11:03): There is some drama in the arin lists because they are handing out provider independent blocks in the ip6 space.
sburjan - (11:04): yeah..
sburjan - (11:05): but not all of the ISP's support independent address
sburjan - (11:05): they usually have a pool .. and assign IPs to clients from that pool (I'm sure you already know that)
vmassol left at 11:05 (Quit: Leaving.
sburjan - (11:06): I didn't know that there are end-point ISPs that allow you to use your bought IP address
cjdelisle - (11:06): yup, you can see "pool" in my reverse lookup. But for websites, there are huge advantages to owning your ip addresses.
sburjan - (11:06): ofc
sburjan - (11:07): anyway.. I'm not worried about this topic :)
sburjan - (11:07): I don't own nothing :))
sburjan - (11:07): only one domain
cjdelisle - (11:07): You can multihome a website just by getting 2 (or more) isps and announcing the same address block from each.
sburjan - (11:08): multihome like multiple hosts hosting the same site ? .. for redundancy ?
florinciu joined #xwiki at 11:08
cjdelisle - (11:08): Redundancy, DDoS mitigation, etc.
cjdelisle - (11:09): The problems with ipv4 are that people "land rush" for as many addresses as possible then don't use them, and that everyone wants provider independent space, so all address space deaggregates until the internet is unroutable.
sburjan - (11:10): yeah ..
cjdelisle - (11:10): Neither of these problems are fixed by ipv6 and the second problem is made worse because the swamp will be bigger.
sburjan - (11:10): we'll need powerful routers full of routes
boscop__ joined #xwiki at 11:10
sburjan - (11:11): anyways there are dynamic routing protocols
cjdelisle - (11:11): Imagine a router that has to route to every /24 block. 16,777,216 entries in the routing table.
sburjan - (11:11): yes but when you have so many routes .. i guess you use a dynamic protocol
sburjan - (11:12): no one in their sane mind will write 16,777,216 static routes in the routing table
sburjan - (11:12): this is madness
boscop_ left at 11:12 (Read error: Connection reset by peer
cjdelisle - (11:12): No they would all be advertised through bgp so the router would just find all 17 million routes.
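The 16,777,216 figure above is just the count of how many /24 blocks fit in the 32-bit IPv4 space; a quick sanity check, reusing the swamp prefix counts quoted earlier in the conversation:

```python
# Each /24 prefix fixes the top 24 of the 32 address bits, so the
# number of distinct /24 blocks in the whole IPv4 space is 2^24.
total_slash24 = 2 ** 24
print(total_slash24)  # 16777216

# The swamp prefix counts quoted above, as a share of that worst case:
share_1999 = 49393 / total_slash24    # ~0.3% of all possible /24s
share_2006 = 161287 / total_slash24   # ~1%, yet still 88% of the table
print(round(share_1999 * 100, 2), round(share_2006 * 100, 2))
```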
sburjan - (11:13): yeah..
cjdelisle - (11:13): If I was working on the core, I would be looking at ways to allow people to pick their own ip addresses like a global pool.
vmassol joined #xwiki at 11:13
cjdelisle - (11:14): aka rewrite bgp.
cjdelisle - (11:14): Instead I'll watch as the delicious drama unfolds.
sburjan - (11:15): don't worry
sburjan - (11:16): there are really good networking people out there
sburjan - (11:16): they won't allow the internet to fall apart
sburjan - (11:17): maybe they will impose some drastic restrictions
cjdelisle - (11:17): what? me? worry?
sburjan - (11:17): :)
cjdelisle - (11:17): http://198.108.95.21/meetings/nanog37/presentations/philip-smith.pdf <-- Great presentation (where I got most of my statistics)
sburjan - (11:18): cisco :)
sburjan - (11:18): but it's pretty old .. 2006
cjdelisle - (11:18): still really good.
cjdelisle - (11:18): So ipv6 _and_ dnssec both fail :)
sburjan - (11:19): well .. odd as it may sound .. I'm a big fan of NAT
sburjan - (11:20): but NAT has a lot of haters because in a large subnet, you need really expensive equipment to NAT.
cjdelisle - (11:20): I love NAT, awesome security.
sburjan - (11:20): special boards
sburjan - (11:20): if you start NAT on a regular CISCO router in a big subnet .. the router will freeze instantly
sburjan - (11:20): in less than a second
cjdelisle - (11:21): oh because it runs out of ram trying to track all connections?
sburjan - (11:21): but yeah, filtering content, security,etc .. .. +1 to NAT
sburjan - (11:21): yes
sburjan - (11:21): the guys from my university told me
sburjan - (11:21): they tried to nat around 10000 computers.. the router froze instantly
cjdelisle - (11:22): Yea and eventually you run out of port numbers.
sburjan - (11:22): they had to buy a special board for NAT.. which was expensive
sburjan - (11:22): yes ..
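The port exhaustion point is simple arithmetic; a back-of-the-envelope sketch (real NAT implementations track per-destination tuples, so this is only a rough lower bound, and the host count is just the university example from the chat):

```python
# TCP/UDP source ports are 16 bits, and the low 1024 are reserved,
# so one public address gives at most roughly this many
# simultaneous NAT mappings:
usable_ports = 2 ** 16 - 1024
hosts = 10_000  # the university NAT example above

print(usable_ports)           # 64512
print(usable_ports // hosts)  # ~6 concurrent connections per host
```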
sburjan - (11:22): but let's get serious .. if you have an IP address block, and know how to subnet it .. almost all your problems are solved
cjdelisle - (11:23): I think it'd be cool to start a business doing anonymous proxying using a nat which assigns you an ip per connection.
sburjan - (11:23): create a subclass for server, subclass for workstations, subclass for manager, or whatever. each of them with their restrictions
cjdelisle - (11:24): like assign the user a session number when they log on and hash that session number with the ip of the site they are going to and mod that against the number of addresses in the pool to get the address to assign.
cjdelisle - (11:25): That way every time I go to some site, I still show up with the same ip, but when I go to a different site, different ip.
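The per-site address idea sketched above (hash the session number with the destination, mod the pool size) can be written in a few lines. Everything here is illustrative: the pool (a TEST-NET /24), the hash choice, and the function name are assumptions, not an actual proxy implementation:

```python
import hashlib
import ipaddress

POOL_BASE = ipaddress.ip_address("192.0.2.0")  # illustrative /24 pool
POOL_SIZE = 256

def egress_address(session_id: int, dest_ip: str) -> ipaddress.IPv4Address:
    """Pick a stable egress address for a (session, destination) pair.

    The same session always maps a given destination to the same
    address, but different destinations generally land on different ones.
    """
    digest = hashlib.sha256(f"{session_id}:{dest_ip}".encode()).digest()
    offset = int.from_bytes(digest[:4], "big") % POOL_SIZE
    return POOL_BASE + offset

# Same session + same site -> same egress IP every time:
a = egress_address(42, "203.0.113.7")
b = egress_address(42, "203.0.113.7")
assert a == b
# Different site -> (very likely) a different egress IP:
print(egress_address(42, "198.51.100.9"))
```

As noted in the chat, a different address is not *guaranteed* (two sites can hash to the same slot), but linking one session's activity across sites becomes much harder.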
sburjan - (11:25): hmmm
sburjan - (11:25): you'd need a big pool
cjdelisle - (11:26): meh, a /24 should do it.
sburjan - (11:26): so your idea is to sell proxy services ?
cjdelisle - (11:27): you wouldn't be *guaranteed* to have a different ip, but it would be near impossible to connect your ip from sending mail to your ip from browsing google etc.
sburjan - (11:27): and what would be t he advantage ?
cjdelisle - (11:27): "sell proxy services" suddenly it sounds like a horrible idea, like who's going to be buying that? and what for? oh... never mind
sburjan - (11:28): and if you don't do it over a VPN .. or your provider doesn't provide this service .. it's useless ..
sburjan - (11:29): logged connections everywhere
sburjan - (11:29): traceable
cjdelisle - (11:29): It's just if you don't want people to be building a dosser on you from your internet searches etc.
cjdelisle - (11:29): In other news, the new javadoc is built: http://hudson.xwiki.org/job/xwiki-platform-core-site-job/site/xwiki-core/apidocs/index.html?com/xpn/xwiki/api/package-summary.html
cjdelisle - (11:30): 3 hours to run.
vmassol left at 11:30 (Quit: Leaving.
sburjan - (11:30): aren't we gonna lose the com.xpn prefix some day?
cjdelisle - (11:31): well, all new stuff is org.xwiki but I don't see the old core going anywhere any time soon.
sburjan - (11:31): slow migration, don't know
cjdelisle - (11:32): Nobody's working on a replacement for the database driver.
sburjan - (11:32): the link you gave.. is it still building? 'cause the link doesn't work
sburjan - (11:33): the com.xpn's are only for database driver ?
cjdelisle - (11:33): Hudson's slow, try the link again.
cjdelisle - (11:33): com.xpn is everything in the old core which is a lot of stuff.
sburjan - (11:33): well basically the core is com.xpn ? because I don't see any org.xwiki there
sburjan - (11:34): I can see :)
sburjan - (11:34): I see a lot of methods from the core aren't documented
cjdelisle - (11:35): Yea.
cjdelisle - (11:36): It's hard to just add documentation being unsure of how the code works, maybe you miss an idiosyncrasy of the function which proves important.
sburjan - (11:36): yeah..but the doc should have been written when the actual code was written, right ?
cjdelisle - (11:37): yup.
cjdelisle - (11:38): hmm, hudson builds are looking pretty good. Everything is building and most stuff has no failed tests.
sburjan - (11:40): agent 1 or agent 2 ?
cjdelisle - (11:41): ?
sburjan - (11:41): we have 2 hudson servers
sburjan - (11:41): actually 3, but one of them is offline
cjdelisle - (11:42): not really, there's only one server, agent 1 and 2 are just slaves working for the server.
cjdelisle - (11:42): I'm looking at http://hudson.xwiki.org/
sburjan - (11:42): me too
sburjan - (11:42): on the left side
cjdelisle - (11:42): Only problem is portlet and I know nothing about portlets.
sburjan - (11:43): so agent 1,2,3 are only slaves, and there is a bigger master server somewhere ?
cjdelisle - (11:45): the master is the computer which hosts myxwiki.org
sburjan - (11:45): and that doesn't build anything? it's only master as a centralized info source?
sburjan - (11:45): so he's doing the building too
cjdelisle - (11:47): I think the master does not build anything (too much work while also hosting myxwiki, maven.xwiki.org and hudson.xwiki.org)
sburjan - (11:52): I'm seeing some redundancies in the tests
sburjan - (11:52): same test written twice, in different projects, using different versions of Selenium
sburjan - (11:53): when the people get back from vacation, I'll call for a tests meeting
cjdelisle - (11:53): Does anyone know how I should go about getting the hudson site job to push its results to maven.xwiki.org? I'm assuming I want to put something into the "Deploy artifacts to Maven repository" box. Is there a password or something for maven?
abusenius - (11:55): hi everyone
sburjan - (11:55): cjdelisle: can you help me and see if you can find the ui-tests on hudson? I can't find them, when they were last run, etc
sburjan - (11:55): hi abusenius
abusenius - (11:55): cjdelisle, seems that X509CryptoService is generating invalid certs
cjdelisle - (11:56): argh.
abusenius - (11:56): the web ID must be absolute
abusenius - (11:56): otherwise there is some exception on deserialization
cjdelisle - (11:56): ahh, I thought I had that set to give the external url.
abusenius - (11:56): getUserDocURL returns relative user url
cjdelisle - (11:57): should be using getDocument("user document name").getExternalUrl()
abusenius - (11:58): well, the problem is, it comes from the bridge
abusenius - (11:58): and it doesn't return XWikiDocument, but some interface
cjdelisle - (11:59): sburjan: http://hudson.xwiki.org/job/xwiki-product-enterprise-tests/com.xpn.xwiki.products$xwiki-enterprise-test-ui/
abusenius - (11:59): I can't find a way to get external url...
abusenius - (12:00): ah, I guess its possible to cast DocumentModelBridge to xwikidoc
cjdelisle - (12:00): no, then you need to depend xwiki-core
abusenius - (12:00): but then we'll have dependency on core
abusenius - (12:01): maybe there is some FancyAndAbsolutelyNotIntuitiveEntitySerializer or something?
cjdelisle - (12:01): xwiki-core will probably end up depending on xwiki-crypto and then maven says "enough of this rat's nest!"
cjdelisle - (12:01): lol
cjdelisle - (12:03): btw: see the new api javadoc?
sdumitriu - (12:03): You could use commons-beanutils or something like that
sdumitriu - (12:03): But that's a bit of overkill
abusenius - (12:03): extend document access bridge?
sburjan - (12:03): thanks Caleb
abusenius - (12:04): re javadoc, yes, cool
sdumitriu - (12:04): abusenius: There's a xwiki-url which should have the getExternalURL method
sdumitriu - (12:04): But it doesn't yet
abusenius - (12:05): :D
abusenius - (12:06): but there is getServerURL, which is already something
cjdelisle - (12:06): hmm, last I looked at xwiki-url it was too fragile to be any use for anything.
abusenius - (12:07): hm, or maybe not
abusenius - (12:08): last time I looked at it, it was completely useless
cjdelisle - (12:09): xwiki-url? When I tried to use it it blew up because I was using an ip address.
cjdelisle - (12:10): I'm going to go find something to eat, be back later.
abusenius - (12:10): I tried to use it for csrf-token, but it gives no access to the url parameters, so I gave up
abusenius - (12:11): *sigh*
sylviarusu left at 12:22 (Quit: Leaving.
sylviarusu joined #xwiki at 12:34
yiiip joined #xwiki at 12:40
yiiip - (12:40): hi. how can i change the textcolor in the xwiki source editor
mflorea - (12:42): yiiip: through CSS, you can define a stylesheet extension and put there the needed CSS rules to change the color
yiiip - (12:42): hmm no i dont want to change the editor. just some text in the wiki
yiiip - (12:42): there has to be something like [color=red] fooo [/color]
mflorea - (12:43): oh, ok, then you can write this:
mflorea - (12:43): some (% style="color:red" %)red(%%) text
yiiip - (12:43): ok, thanks
mflorea - (12:44): http://platform.xwiki.org/xwiki/bin/view/Main/XWikiSyntax#HParameters
abusenius left at 12:44 (Remote host closed the connection
nuvolari - (13:27): oh dear... is it possible to lock out everyone on xwiki by mistake?
nuvolari - (13:34): false alarm
nuvolari - (13:34): glassfish died :P
yiiip - (13:34): where can i edit how the various tags are displayed
jvdrean - (13:42): yiiip: /xwiki/bin/view/XWiki/TagCloud ?
florinciu1 joined #xwiki at 13:51
florinciu left at 13:54 (Ping timeout: 276 seconds
MartinCleaver joined #xwiki at 13:59
abusenius joined #xwiki at 14:00
cjdelisle - (14:03): abusenius: Should we just change the api to make the script provide the webid?
abusenius - (14:05): hm, we would need to pass it through in many cases
abusenius - (14:05): it is not a good solution imo
cjdelisle - (14:05): well, xwiki-crypto has a tree dependency structure so the number of changes is relatively small.
abusenius - (14:06): the users of that api would need to know what it is, provide something correct etc.
cjdelisle - (14:06): sure, but it's not a security problem because anyone can create a cert with anything on the server or off.
abusenius - (14:07): well, yes, it is a usability/design problem
cjdelisle - (14:07): Hmm, maybe would be a security issue if a "sign all new certs" authority was implemented.
abusenius - (14:07): that comes from the usability/design problem of xwiki api ^^
cjdelisle - (14:08): IMO the answer when there are dependencies is to provide all services then have the dependency rat's nest at the script level.
abusenius - (14:08): what is the web id url used for?
venkatesh left at 14:08 (Ping timeout: 240 seconds
cjdelisle - (14:08): FOAFSSL compatibility.
cjdelisle - (14:09): And maybe we will want to use it for permissions management.
abusenius - (14:09): well, and what is it used for there?
cjdelisle - (14:10): You go to a foafssl website, it reads the webid, loads the page at that address, parses the fingerprint, compares to the cert fingerprint, knows what website you belong to.
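The FOAFSSL flow described above boils down to: fetch the document at the cert's webid URI and compare the fingerprint published there with the fingerprint of the presented cert. A minimal sketch of just the comparison step, with the fetch stubbed out; the function names are assumptions for illustration, not the real FOAF vocabulary or XWiki API:

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-1 fingerprint, colon-separated, as commonly printed for certs."""
    digest = hashlib.sha1(cert_der).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2)).upper()

def webid_matches(published_fingerprint: str, presented_cert_der: bytes) -> bool:
    # In the real flow, published_fingerprint would be parsed out of the
    # page fetched over HTTP from the webid URI inside the certificate.
    return published_fingerprint == cert_fingerprint(presented_cert_der)

fake_cert = b"not a real DER certificate, just illustrative bytes"
fp = cert_fingerprint(fake_cert)
assert webid_matches(fp, fake_cert)
assert not webid_matches(fp, b"some other cert")
```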
abusenius - (14:10): why not implement a component-friendly api to get the external ulr? it is obviuosly needed, not only for crypto
cjdelisle - (14:11): Making script provide everything is an attractive answer because we could drop all dependencies on model and bridge.
abusenius - (14:12): well, but it just makes *everything* that uses crypto be responsible for knowing internal things like foasfssl compatibility
abusenius - (14:13): for example, signed scripts couldn'T care less about that
cjdelisle - (14:13): no, only everything which uses "generate new cert" (registration page)
abusenius - (14:13): but I agree that it is much easier than fixing xwiki-url
cjdelisle - (14:14): :)
cjdelisle - (14:14): I think xwiki-url and xwiki-action should be sandboxed.
abusenius - (14:14): wdym?
cjdelisle - (14:15): moved to contrib/sandbox until they work and we agree on them.
abusenius - (14:15): maybe
abusenius - (14:15): at least currently they are not really implemented, so nobody uses them
cjdelisle - (14:17): Yea, at best they are unused appendices, at worst they are suppressing development which might choose different directions.
sburjan left at 14:17 (Ping timeout: 248 seconds
sylviarusu left at 14:19 (Read error: Connection reset by peer
abusenius - (14:19): ok, we can pass the webid url around, but what do we do with getCurrentUser()?
cjdelisle - (14:20): certsFromSpkac(final String spkacSerialization, final int daysOfValidity)
cjdelisle - (14:20): would become
abusenius - (14:20): it is good to have it, because it limits the user from generating certs for arbitrary people
cjdelisle - (14:21): certsFromSpkac(final String spkacSerialization, final int daysOfValidity, final String userDocumentURL, final String userDocumentReference)
abusenius - (14:21): userName?
cjdelisle - (14:21): userName == "JohnSmith"
abusenius - (14:21): fullUserName
cjdelisle - (14:22): userDocumentReference == "xwiki:XWiki.JohnSmith"
abusenius - (14:22): in any case, it's not a reference
abusenius - (14:22): reference is DocumentReference or something
abusenius - (14:22): string is a name, path, maybe url, but not a reference
cjdelisle - (14:22): meh, until EntityReference is abandoned in favor of ObjectPointer ;)
sburjan joined #xwiki at 14:23
cjdelisle - (14:23): you Iasi guys all on wifi?
boscop__ is now known as boscop ([email protected]
abusenius - (14:25): seems that getUserDocURL is only used in crypto service for creating certs, so we can drop it
abusenius - (14:25): getUserName is also used in signedscripts
cjdelisle - (14:26): You are still using stuff from crypto.internal.*
cjdelisle - (14:26): ?
tmortagne left at 14:26 (Read error: Connection reset by peer
cjdelisle - (14:27): hmm. Why does crypto store the user name? I forgot.
abusenius - (14:27): yea, didn't want to duplicate code
abusenius - (14:28): but if crypto will stop using it, we could move user doc utils to scripts
cjdelisle - (14:28): ^^ you are using internal "proprietary" api, I can change that tomorrow and it is not an api break
abusenius - (14:28): its still under heavy development :)
cjdelisle - (14:29): hmm, I think the user document url is better than the user name.
cjdelisle - (14:29): I forgot why I had the user DocumentReference (serialized as string) in the first place.
sburjan - (14:29): .
abusenius - (14:29): well, in principle the name and the url duplicate information
cjdelisle - (14:29): .
abusenius - (14:29): ,
abusenius - (14:30): we introduced the name to know which cert it is
abusenius - (14:31): and the webid url is stored in some sort of not-very-standard extension
cjdelisle - (14:32): uri is more specific, it is interwiki, we don't have to declare a dependency on model, it isn't tied to EntityReference. I like.
abusenius - (14:32): can we easily convert url to name?
tmortagne joined #xwiki at 14:32
cjdelisle - (14:32): why do you want to? just do a http get on that uri and parse the page.
cjdelisle - (14:33): we can put xml on the user's page with all the info you need.
abusenius - (14:33): if you implement a method in xwikicert to get the DocumentReference to the user doc from that webid extension, I'm fine with dropping the name :)
cjdelisle - (14:33): still have a dependency on model then.
abusenius - (14:33): what? parsing a page? it's the same xwiki
cjdelisle - (14:34): so?
abusenius - (14:34): it is *internal* stuff, why on earth should I use http get and a xml parser to get the freaking certificate stored in the db on the same machine?
abusenius - (14:34): from a component
cjdelisle - (14:35): what exactly do you need?
cjdelisle - (14:35): I mean you have the cert from the signature.
abusenius - (14:36): I want to make sure it is the correct one
abusenius - (14:36): i.e. I need to get the "real" cert by user name
cjdelisle - (14:36): k so you need to get a fingerprint from the user page.
abusenius - (14:36): I need to be sure this user is allowed to do what he is supposed to do
vmassol joined #xwiki at 14:37
abusenius - (14:38): no, I need to get the fingerprint from the DocumentReference and not some random URL hosted in south africa
abusenius - (14:38): *without* http get
cjdelisle - (14:38): South Africa is cool.
abusenius - (14:38): *and* without any parsing, rpc, soap etc.
cjdelisle - (14:38): They have this lottery there and everybody wins.
abusenius - (14:38): yea, sure :)
cjdelisle - (14:39): What if the user's page has a signed note saying "I grant you authority to do this"?
abusenius - (14:40): if this page is on another computer in south africa, it doesn't help much
cjdelisle - (14:40): why?
abusenius - (14:40): well, how do I get to the page of the person who signed that?
cjdelisle - (14:41): well the signature contains a cert.
cjdelisle - (14:41): you read the cert, and that contains a url.
cjdelisle - (14:41): recurse :)
abusenius - (14:42): and besides this, I'm -1 to anything that needs the network to get a piece of data stored on the same pc in the db accessible from the same java vm
abusenius - (14:42): it is just ridiculous
abusenius - (14:42): yea, and the cert is also stored in south africa
abusenius - (14:43): the certs are self signed
vmassol left at 14:43 (Quit: Leaving.
cjdelisle - (14:43): well you have to continue recursion until you find a cert that you trust.
abusenius - (14:43): we cannot know if they are correct or not
yiiip - (14:43): when im listing all the documents containing a certain tag. how can i edit this view, for example write something behind every document title?
abusenius - (14:44): and how do I know which cert I trust if I cant get a cert of Admin stored in the *same* wiki
cjdelisle - (14:44): yiiip: it's one of the .vm files in /templates/ inside of the .war file. I think you're looking for docextras.vm?
cjdelisle - (14:44): That's why I would be for having one root cert stored in a config file.
cjdelisle - (14:45): that cert signs the admin cert and the admin cert can then sign anyone he wants.
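The chain-of-trust scheme sketched above (a root cert in a config file signs the admin cert, which signs user certs, with recursion capped as suggested later in the conversation) can be modeled in a few lines. This is a toy illustration only, not the XWiki crypto API; all names and the cert representation are made up:

```python
# Toy model: a cert is (subject, issuer); trust means we can walk
# issuer links from a leaf up to the configured root subject.
CERTS = {
    "root":    ("root", "root"),        # self-signed root, from config file
    "admin":   ("admin", "root"),       # admin cert signed by the root
    "alice":   ("alice", "admin"),      # user cert signed by the admin
    "mallory": ("mallory", "mallory"),  # self-signed stranger
}

MAX_HOPS = 10  # cap the walk so a bad chain can't recurse forever

def chain_to_root(subject: str, root: str = "root") -> bool:
    for _ in range(MAX_HOPS):
        if subject == root:
            return True
        entry = CERTS.get(subject)
        if entry is None or entry[1] == subject:  # unknown or self-signed
            return False
        subject = entry[1]  # move up to the issuer
    return False

assert chain_to_root("alice")        # alice -> admin -> root
assert not chain_to_root("mallory")  # no path to the configured root
```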
yiiip - (14:45): cjdelisle: isn't this just for the document footer?
yiiip - (14:46): http://localhost:8080/xwiki/bin/view/Main/Tags?do=viewTag&tag=foo
abusenius - (14:46): don't you see how ridiculous it is to use the network to access data stored in RAM? it's like talking via satellite phone to a person sitting next to you
abusenius - (14:46): I'm also for having a root cert somewhere not in DB, but for a different reason
cjdelisle - (14:47): yiiip: You may be right, when I'm looking for something I usually pull up the source code and find an id="" for a tag near to what I want and then do a search for that id in the /templates folder.
cjdelisle - (14:47): *source code == html code.
flaviusolaru joined #xwiki at 14:47
cjdelisle - (14:48): abusenius: I don't know if I'd call loopback network access.
abusenius - (14:48): in comparison to a function call it is a carrier pigeon
cjdelisle - (14:49): how do you recommend getting the document?
abusenius - (14:49): like we do now
cjdelisle - (14:50): how is that? crypto doesn't get a document so I don't know.
abusenius - (14:50): it works, it does not depend on core
abusenius - (14:50): I mean doc reference
abusenius - (14:50): why do you need the doc?
cjdelisle - (14:50): I don't.
cjdelisle - (14:51): depending on bridge is not much better than depending on core, actually I would call it a hack.
abusenius - (14:51): there is no other choice
cjdelisle - (14:51): http :D
abusenius - (14:52): that's an even worse hack
cjdelisle - (14:52): I don't think so.
abusenius - (14:52): it is
cjdelisle - (14:52): Proof?
abusenius - (14:52): it uses core internally (actually even more, the whole server), just like bridge
abusenius - (14:53): but instead of making a function call, you send a carrier pigeon, then scan the message and OCR it
cjdelisle - (14:53): It has a lot of advantages, core could be replaced, bridge could be removed, but if xwiki still works, http get still works.
cjdelisle - (14:53): It would work in a cluster.
abusenius - (14:54): lets decouple all components using http then
abusenius - (14:54): its soooo great
abusenius - (14:54): no dependencies any more
abusenius - (14:54): everything is totally flexible
abusenius - (14:55): you could even replace core by another wiki and nobody would notice
cjdelisle - (14:56): quite a bit simpler, you only provide one service for the internal and the world...
abusenius - (14:56): (that was irony, not a proposal ;) )
abusenius - (14:59): is there some FancyDocumentResolver that can convert an external url -> DocumentReference?
cjdelisle - (14:59): http://en.wikipedia.org/wiki/Terracotta_Cluster
cjdelisle - (14:59): there it is.
abusenius - (15:00): http://en.wikipedia.org/wiki/KISS_principle
cjdelisle - (15:01): XWikiUniformResourceIndicatorDocumentReferenceResolutionAndSerializationInterface ?
tmortagne - (15:02): abusenius: pretty sure xwiki-url is already used in core to find the DocumentReference from the URL
abusenius - (15:02): sounds good, very precise ^^
tmortagne - (15:02): see XWiki#getDocumentReferenceFromPath
abusenius - (15:02): does it also work on strange configurations?
tmortagne - (15:03): abusenius: depends what you mean
tmortagne - (15:03): all i know is that we are using it right now to parse all URL in standard
abusenius - (15:03): I mean, url rewriting, clusters, proxies etc.
abusenius - (15:04): I intend to use it for PR handling, it should always work
cjdelisle - (15:04): +1 kiss principle, that's one reason why I like the http get and parse method. FOAFSSL is written, we don't have to trust the database, if we can't build a cert chain, then there is no trust.
cjdelisle - (15:05): * note: I'm not sold on http get myself, I just want to look at all aspects.
abusenius - (15:05): I find trusting the DB on *my* server is easier than trusting some other random site
cjdelisle - (15:06): I say only trust xwiki.properties.
abusenius - (15:06): you can't stop trusting the db, it contains all the data
cjdelisle - (15:07): sure you can, sign all data and verify on load.
abusenius - (15:07): either you sign really everything or you trust the db
abusenius - (15:07): and I can tell you, signing everything will not be implemented any time before version 4.0
cjdelisle - (15:08): Obviously the db can change a comment or a document, I'm talking about no trust for user permissions and PR.
abusenius - (15:08): which are stored in db
abusenius - (15:08): so whats the difference?
cjdelisle - (15:08): root cert is in a file, if the cert chain doesn't trace back to it, no trust.
abusenius - (15:08): (and just a side remark, xwiki will not be used by fbi, nsa or similar)
cjdelisle - (15:09): yet
cjdelisle - (15:09): ];)
cjdelisle - (15:09): meh? how did that ] get there?
abusenius - (15:09): thats an evil nsa trick
cjdelisle - (15:10): I know, just noticing it looks like horns.
abusenius - (15:10): :)
cjdelisle - (15:10): It's a conspiracy, they put all the keys too close together on my keyboard.
abusenius - (15:11): I think we need to have a root cert to be able to export/import everything
cjdelisle - (15:11): hmm?
abusenius - (15:12): to be able to trust the content of the xar
abusenius - (15:13): if you export signed data and a self-signed cert, it is possible to exchange the cert, and nobody would notice
cjdelisle - (15:13): When you import the xar, the users who are imported would then be checked against the installed root cert.
abusenius - (15:13): yes, so we need to have something that is either built-in or not exported
cjdelisle - (15:14): Whoever imports the xar would have to have write on all the documents which it overwrites.
abusenius - (15:14): for things that we redistribute, built-in is better
abusenius - (15:14): well, yes
cjdelisle - (15:15): I would say ship a root cert but encourage users to change it.
abusenius - (15:15): I'd prefer ship root cert, and make client cert override it, if present
cjdelisle - (15:16): hmm. if we ship with an admin account, what will the webid be for the admin account?
abusenius - (15:16): it would be easier to go back to default
abusenius - (15:16): good question
cjdelisle - (15:17): So if we are going to do this, we have to also tackle the problem of bootstrapping a new wiki.
cjdelisle - (15:18): IMO it should make you register the admin account and then generate the root cert and let you download it and ask you to install it on the server.
abusenius - (15:18): there is no admin by default, only superadmin
abusenius - (15:18): admin is in the xar
abusenius - (15:19): maybe we should generate it on the first login as admin?
cjdelisle - (15:19): Yea, this would be a change. We would make the user register a root account just like linux does when you install.
abusenius - (15:19): and not ship admin in the xar
cjdelisle - (15:19): correct.
cjdelisle - (15:20): Well actually we could have an "admin like" user which is responsible for all xwiki dev team documents.
abusenius - (15:20): what if we allow not having webid for such cases?
cjdelisle - (15:21): how do you establish the trust chain?
abusenius - (15:21): we are forcing everyone to have a correct webid for "foafssl compatibility", most people don't know what it is and don't care
abusenius - (15:22): trust chains do not depend on foafssl
cjdelisle - (15:22): I can gut the webid _and_ the user name but certs become less useful.
abusenius - (15:23): imo webid is only useful if you use this cert in foafssl
abusenius - (15:23): i.e. store it in browser, use it for logging in etc
cjdelisle - (15:24): foafssl is powerful, it has a lot of applications, I don't want to throw it out on a whim.
abusenius - (15:24): also an interesting question, what happens if you set up your wiki, and after a year decide to change the host?
cjdelisle - (15:24): aka change the uri.
abusenius - (15:24): I don't say we should throw it out, I just say it is not the most important thing
cjdelisle - (15:25): certs only last a year :D
cjdelisle - (15:25): win
abusenius - (15:25): well, after 6 months? :)
abusenius - (15:25): you'll have to change the whole chain
abusenius - (15:25): resign everything
cjdelisle - (15:26): Me? I would put an entry in my hosts file :)
abusenius - (15:26): well, you're not alone
abusenius - (15:27): how about other 157 users?
cjdelisle - (15:27): no, on the server.
cjdelisle - (15:27): Oh, I'm still thinking http get :)
cjdelisle - (15:28): hosts file: 127.0.0.1 myOldWebAddress.com
abusenius - (15:28): http get is bad
abusenius - (15:28): no, we shouldn't rely on the url for such things in the hope that at the other end there will be localhost
cjdelisle - (15:28): I agree it's slow but bad?
abusenius - (15:28): unreliable
cjdelisle - (15:29): what's more reliable than uri?
abusenius - (15:29): it might be on the other side of the planet
cjdelisle - (15:30): yea, if you change your dns address, you break everything, all the permalinks on the internet. Resigning is just a small part of the problem.
abusenius - (15:31): it also makes it very easy to redirect to another site to get the cert
cjdelisle - (15:31): that's a + right?
abusenius - (15:31): no
cjdelisle - (15:31): hm?
abusenius - (15:31): its a big -
abusenius - (15:31): think of malicious attackers, rogue certs etc
cjdelisle - (15:31): if rsa or sha1 get broken we're sunk.
abusenius - (15:32): instead of looking at the trusted db, you need to rely on certs stored elsewhere
cjdelisle - (15:32): Is that what you're talking about?
abusenius - (15:32): it could be stolen
cjdelisle - (15:32): so what if somebody wants to host my signatures? Better their bandwidth than mine.
abusenius - (15:33): it is your *and* their bandwidth
abusenius - (15:33): remember, you wanted to do it recursively
abusenius - (15:34): so one request to your server will become 10 requests back and forth to south africa
cjdelisle - (15:34): there's sort of a DoS attack because they could send you on a wild goose chase across the internet trying to validate a cert but you can have a maximum hop count or something.
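[Editorial note: the hop-count guard suggested here can be sketched as below. This is purely illustrative: the dict-based certs and the `fetch_issuer` callback are hypothetical stand-ins for the real certificate objects and the real (network) issuer lookup.]

```python
# Illustrative sketch: bound recursive certificate-chain resolution with a
# hop limit so a malicious chain cannot send the validator on an unbounded
# "wild goose chase" across the network.
MAX_HOPS = 5

def validate_chain(cert, fetch_issuer, hops=0):
    if hops > MAX_HOPS:
        return False  # refuse to chase the chain any further (DoS guard)
    if cert.get('self_signed'):
        return cert.get('trusted', False)
    issuer = fetch_issuer(cert.get('issuer'))
    if issuer is None:
        return False  # issuer cert unreachable
    return validate_chain(issuer, fetch_issuer, hops + 1)
```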
abusenius - (15:35): hosted on and 0wned windows with 2k/s modem connection
abusenius - (15:35): it is way too complicated and unreliable
cjdelisle - (15:36): have you read the rfc for TCP lately?
abusenius - (15:36): no :)
cjdelisle - (15:36): re complicated and unreliable ^^
abusenius - (15:37): exactly, don't rely on it
cjdelisle - (15:37): it's the same way as https works.
abusenius - (15:39): ok, let's stop wasting our time
cjdelisle - (15:39): there does seem to be a problem though.
abusenius - (15:40): back to the original problem :) I'd prefer to implement getExternalURL somewhere, this would solve everything
cjdelisle - (15:41): If 20 people put permission objects on my page and 20 people put permission objects on each of their pages, you have an explosion of search directions you can take to try to resolve a cert :(
abusenius - (15:42): and if their pages are hosted elsewhere you can't even cache
cjdelisle - (15:42): Oddly enough it seems to be the same problem as trying to fix bgp.
abusenius - (15:43): anyway, seems that getExternalURL has to be in bridge, because the info is in the core
abusenius - (15:43): which is kind of bad
cjdelisle - (15:44): Ok, so you need the document reference (as string) right?
abusenius - (15:44): re what?
abusenius - (15:45): accessing cert?
cjdelisle - (15:45): you need xwiki:XWiki.JohnSmith to be in the cert?
abusenius - (15:45): well, it is easier to create a document reference from that
abusenius - (15:46): (the code is already there)
cjdelisle - (15:46): If you don't need it then I'll remove it entirely.
abusenius - (15:46): what do you put to SubjectDN?
cjdelisle - (15:46): ""
abusenius - (15:46): and IssuerDN?
abusenius - (15:47): this is bad
cjdelisle - (15:47): it'll shorten the signatures some.
abusenius - (15:47): you will not be able to see who is it for
cjdelisle - (15:47): well you could just copy the webid in there but the signature gets longer.
abusenius - (15:47): SubjectDN is standard, the extension webid uses is not
yiiip left at 15:48 (Quit: Page closed
abusenius - (15:48): who cares about +50 bytes
abusenius - (15:48): it's about 1K already
cjdelisle - (15:48): the signatures?
abusenius - (15:50): definitely
abusenius - (15:50): 4096 bits is 512 bytes
abusenius - (15:50): even for 2048, you have signature, signature in cert, public key...
abusenius - (15:50): *4/3
cjdelisle - (15:50): browser generated cert = 2192 base64 chars.
cjdelisle - (15:50): 64 more chars = 1 more line of base64.
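[Editorial note: the size arithmetic here checks out. Base64 turns each 3 bytes into 4 characters, so ~50 extra DER bytes become ~68 base64 characters, roughly one extra 64-character PEM line:]

```python
import base64

# ceil(50 / 3) * 4 = 68 base64 characters for 50 extra payload bytes,
# i.e. roughly one additional 64-character PEM line.
extra_bytes = 50
encoded = base64.b64encode(b'\x00' * extra_bytes)
print(len(encoded))  # 68
```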
abusenius - (15:51): and? its like 5%
cjdelisle - (15:51): for duplicated text?
abusenius - (15:51): well, use the user name then :)
abusenius - (15:51): its short
cjdelisle - (15:52): blah, dependencies.
abusenius - (15:52): regex :)
cjdelisle - (15:52): hahaha
cjdelisle - (15:55): do you need the user name?
abusenius - (15:55): I need DocumentReference to the user page and I need to see whose cert it is (in the certificate manager in FF for example)
cjdelisle - (15:55): We could allow the script to specify a common name (foafssl did this).
abusenius - (15:55): and having a readable SubjectDN is more important than saving 50 byte
abusenius - (15:55): and allow to have different name and url, great
xwikibot joined #xwiki at 15:57
cjdelisle - (15:57): I was thinking xwikibot was hosted there, actually it was xwikibridge bot.
cjdelisle - (16:01): So I'm to understand that you will be needing the user document name.
mflorea left at 16:01 (Quit: Leaving.
abusenius - (16:01): yes
tmortagne1 joined #xwiki at 16:01
tmortagne left at 16:01 (Read error: Connection reset by peer
cjdelisle - (16:01): ouch, xwiki.org not doing well...
xwikibot joined #xwiki at 16:03
abusenius - (16:03): yea, would help for sure when the server is down, like now...
cjdelisle - (16:03): suddenly a cluster sounds nicer.
cjdelisle - (16:04): aka http get :)
abusenius - (16:04): noooooooooooo
cjdelisle - (16:04): :D
cjdelisle - (16:05): I know it's slow and I wish it was faster like dns or something.
abusenius - (16:06): as if dns is the fastest thing ever
cjdelisle - (16:06): it's pretty fast, you send a udp packet and the server sends one back.
abusenius - (16:07): it still takes dozens of milliseconds
sburjan left at 16:07 (Quit: Ex-Chat
cjdelisle - (16:07): dnssec is like you send a udp packet and it sends like 800.
abusenius - (16:07): fast is when it takes dozens of nanoseconds
cjdelisle - (16:08): so naturally you set the source port on your packet to ohhh twitter?
cjdelisle - (16:08): nanoseconds? java? lol
abusenius - (16:08): ok, at least microseconds on average ^^
cjdelisle - (16:09): I think it's pretty common to use caches which are on a different server.
cjdelisle - (16:09): aka network connection.
cjdelisle - (16:10): memcached
abusenius - (16:11): if the different server is in the other room as opposed to another continent it is a large improvement
abusenius - (16:12): but other room as opposed to another memory cell is not
abusenius - (16:12): (unless you use a huge numa system)
cjdelisle - (16:15): memcached uses tcp but the servers stay connected all the time.
cjdelisle - (16:15): udp is optional.
flaviusolaru left at 16:15 (Read error: Connection reset by peer
cjdelisle - (16:20): dozens of microseconds isn't really going to happen no matter what you do. Database loads?
cjdelisle - (16:21): Even if you get from cache, it has to clone the document, all the objects, the attachments etc.
cjdelisle - (16:25): getDocumentURL(DocumentReference documentReference, String action, String queryString, String anchor, boolean isFullURL);
cjdelisle - (16:25): ?
abusenius - (16:28): where is it?
cjdelisle - (16:31): proposed.
tmortagne1 left at 16:32 (Read error: Connection reset by peer
tmortagne joined #xwiki at 16:33
abusenius - (16:36): yes, something like this
cjdelisle - (16:39): I think what I need to do is change the hudson site build to say mvn clean site site:deploy correct?
cjdelisle - (16:40): (to get the maven.xwiki.org/site to be updated)
cjdelisle - (16:40): sdumitriu: tmortagne ? ^^
tmortagne - (16:42): cjdelisle: just mvn clean site:deploy i think
cjdelisle - (16:42): I tried mvn clean site:deploy locally and it said run site first.
cjdelisle - (16:42): I did mvn clean site site:deploy and it tried to connect via ssh so I figured it worked.
tmortagne - (16:42): ok, that's weird then
tmortagne - (16:43): but i don't know site plugin very well
cjdelisle - (16:43): Maven is documented really well :)
cjdelisle - (16:45): changed. and changed to be bound to agent2 since agent1 always seems to be busy.
cjdelisle - (16:46): and building.
abusenius - (16:46): great, there is a nice StandardXWikiURLFactory, which cannot work because HostResolver it uses is not implemented...
cjdelisle - (16:48): IMO committing stuff that is incomplete, doesn't work, isn't tested is wrong.
abusenius - (16:49): the test works, because HostResolver is mocked there :)
cjdelisle - (16:51): I don't like that style because you don't know that the HostResolver interface can possibly be implemented.
abusenius - (16:51): at least a "BIG PHAT WARNING: NOT IMPLEMENTED YET" would be cool
cjdelisle - (16:52): sandbox.
cjdelisle - (16:53): It would be nice to sandbox all nonfunctional code.
abusenius - (16:53): would be hard, some classes from that package are already used
cjdelisle - (16:53): well then they are functional.
cjdelisle - (16:56): I'm thinking about proposing adding "latest-release" and "second-latest-release" to svn which are externals pointing to the last release and release before last.
cjdelisle - (16:56): That way hudson need not be changed when a release happens.
cjdelisle - (16:56): not sure if it will save work or not though.
cjdelisle - (16:58): might just shift the work from hudson jobs to svn changes.
abusenius - (17:01): nice, just managed to convert external url to user name :)
abusenius - (17:01): it only took 8 lines and 3 new dependencies...
cjdelisle - (17:02): neat, can you trust it will be the same as the xwiki-core urlFactory?
abusenius - (17:04): probably not :)
abusenius - (17:05): I more or less copy-pasted XWiki#getDocumentReferenceFromPath, so it would fail too
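[Editorial note: the url -> user name conversion being discussed could look roughly like the sketch below. This is a hedged, regex-based stand-in that assumes the standard XWiki path layout `http://host/xwiki/bin/<action>/<Space>/<Page>`; the real code goes through `XWiki#getDocumentReferenceFromPath` and, as noted above, shares the same limitation of trusting the URL shape.]

```python
import re

# Hypothetical helper: pull (space, page) out of a standard XWiki URL.
# Returns None when the URL does not match the assumed /bin/<action>/ layout.
def url_to_document(url):
    m = re.search(r'/bin/[^/]+/([^/?#]+)/([^/?#]+)', url)
    if m is None:
        return None
    return m.group(1), m.group(2)
```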
cjdelisle - (17:07): I'm playing with maven versions plugin, it looks promising.
abusenius - (17:13): hm, seems that the url -> name conversion is not quite correct
MartinCleaver left at 17:14 (Quit: MartinCleaver
Enygma` left at 17:17 (Ping timeout: 240 seconds
tmortagne left at 17:28 (Read error: Connection reset by peer
tmortagne joined #xwiki at 17:29
lucaa joined #xwiki at 17:35
lucaa - (17:35): guys, it seems that platform does not build with a clean repo, due to commons-net:2.1 which is not found
lucaa - (17:36): oanat discovered it on her machine and I tried too after deleting commons-net:2.1 and I got:
lucaa - (17:37): http://pastebin.com/176jQv6y
tmortagne - (17:39): indeed there is not 2.1 version on http://repo1.maven.org/maven2/commons-net/commons-net/
tmortagne - (17:40): looks like 2.1 has not been released anyway...
lucaa - (17:41): what is this? how did it end up in our deps then?
tmortagne - (17:43): sdumitriu: looks like you did the upgrade to commons-net in root pom.xml, any idea why it used to work ?
sdumitriu - (17:44): It appeared to be released, but then it was unreleased
tmortagne - (17:45): was this version very important for us or could we downgrade to 2.0 ?
sdumitriu - (17:45): Better downgrade
lucaa - (17:45): true, true, I found other people on the web with the same problem
lucaa - (17:45): so it seems that at some point it was there and now it's not
tmortagne - (17:46): lucaa: it's not only maven issue, it says on commons-net website that the last version is 2.0
lucaa - (17:47): there are release changes though: http://commons.apache.org/net/changes-report.html#a2.1
tmortagne - (17:47): yep
asrfel left at 17:47 (Quit: Leaving.
tmortagne left at 17:51 (Ping timeout: 248 seconds
tmortagne joined #xwiki at 17:52
cjdelisle - (18:06): "This version was not released by Apache Commons and the project does not know, what it actually contains."
cjdelisle - (18:07): a bit ominous
cjdelisle - (18:07): "Apache Commons PMC realized about two weeks ago that the mvn repo contains artifacts for commons-net 2.1 which has never been released and subsequently removed those from central"
abusenius - (18:14): nice
abusenius left at 18:31 (Ping timeout: 260 seconds
tsziklay joined #xwiki at 18:57
cjdelisle - (19:08): tsziklay: you had a question.
cjdelisle - (19:08): "xwiki supports macros, specifically for me the Python macro. Could I basically do something like "if link == click, run pythonMacroCode{{my code here to execute bash script}}"
tsziklay - (19:08): yes thats right
cjdelisle - (19:09): when the user clicks a link they load a page correct?
tsziklay - (19:09): right
cjdelisle - (19:09): So you could put something in the link like [[link to somewhere>>Some.Where?runScript=1]]
cjdelisle - (19:10): and at the page Some.Where, you put a python script like the following:
cjdelisle - (19:10): {{python}} if request.getParameter('runScript') == 1 : do something..... {{/python}}
cjdelisle - (19:11): that's pseudopython, I don't really know python.
cjdelisle - (19:13): Due to a bug in jython, you might have to begin your python macro with this snippet: http://code.xwiki.org/xwiki/bin/view/Snippets/AccessToBindingsInPythonSnippet
cjdelisle - (19:13): (that is in order to have the request object available to you.)
cjdelisle - (19:13): more information is here: http://platform.xwiki.org/xwiki/bin/view/DevGuide/Scripting#HPythonSpecificInformation
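[Editorial note: a runnable approximation of the pseudopython above. In a real `{{python}}` macro the `request` object is a binding provided by the wiki (see the snippet link above); here it is mocked so the logic can run standalone. Note that a servlet-style `getParameter` returns a string (or None), so the comparison should be against `'1'`, not the integer `1`.]

```python
# Mock of the servlet request binding that the wiki exposes to script macros;
# in a real {{python}} macro the `request` object is provided for you.
class MockRequest:
    def __init__(self, params):
        self._params = params

    def getParameter(self, name):
        return self._params.get(name)

# Simulates loading Some.Where?runScript=1 via the link on the other page.
request = MockRequest({'runScript': '1'})

# getParameter returns a string (or None), so compare with '1', not 1.
if request.getParameter('runScript') == '1':
    result = 'running the script'
else:
    result = 'just viewing the page'

print(result)  # running the script
```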
abusenius joined #xwiki at 19:14
cjdelisle - (19:15): you are a man of many ip addresses Alex.
abusenius - (19:17): lol
tsziklay - (19:19): I see, that sounds good cjdelisle. Is there any more documentation on xwiki so I know exactly what code I need to make a page?
cjdelisle - (19:20): You want to make a page programmatically? or you mean how to make a page manually?
KermitTheFragger left at 19:25 (Remote host closed the connection
tsziklay - (19:30): cjdelisle: I'm not sure, I just do not know how to make an xwiki page. my boss has indicated that he wants this "if click then run script" functionality and said xwiki is probably able to support it.
tsziklay - (19:31): cjdelisle: basically all I would need is a single page showing proof of concept for this; I do not need anything beyond say, a title, a URL, and a link that runs my python code.
cjdelisle - (19:32): Do you have to be able to view the page with the python code without running it?
tsziklay - (19:36): cjdelisle: that doesn't matter I think. my boss just wants a wiki page that any of our employees can have access to that will sort of run the script remotely. for now it doesn't need to be fleshed out beyond that.
cjdelisle - (19:36): so create a page and make the content this: {{python}} print 'hello world' {{/python}}
cjdelisle - (19:37): You have a wiki server running in the office?
tsziklay - (19:38): right, that's what I don't know how to do: "create a page". is there anything to read about on how to do this? the xwiki documentation I have found is incredibly vague
tsziklay - (19:38): cjdelisle: we have a wiki server but we want to create a new one that will incorporate this python functionality
cjdelisle - (19:39): if you have an xwiki server (which is relatively new), you have python functionality.
tsziklay - (19:39): cjdelisle: I am going to be running the preliminary wiki on a crappy server with some AMD processor and 2 gb of ram, basically something that was not meant to serve data :)
tsziklay - (19:40): cjdelisle: right, and I don't have an xwiki server. we have a different wiki currently and want to transition to xwiki
tsziklay - (19:40): cjdelisle: so my task is to figure out how to get an xwiki server up and running. then add a python script to it.
cjdelisle - (19:41): The hard part is getting the server up, running python is very easy.
tsziklay - (19:42): yup. can you point me to any kind of documentation about getting the server up?
cjdelisle - (19:43): http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Installation
cjdelisle - (19:43): xwikibot, you need more features.
jvdrean left at 19:46 (Quit: Leaving.
cjdelisle - (19:46): hmm we're still having that ipv6 dependency problem.
tsziklay left at 19:47 (Ping timeout: 264 seconds
tsziklay joined #xwiki at 19:49
tsziklay - (19:53): got some kind of error, booted me and rejoined me just now
tsziklay - (19:53): btw thanks for the link cjdelisle. I assume there is also more information on actually creating a simple page?
cjdelisle - (19:54): I'm not sure. I would imagine. if not then you can write it ;)
tsziklay - (19:57): alright, I guess I'll jump off that bridge when I get there
cjdelisle - (19:57): how optimistic. I'll be there to help (push) you. :)
tsziklay - (19:57): I am thinking that I'll just install the standalone distribution. are there any disadvantages to that other than possibly not being familiar with what they give you?
cjdelisle - (19:58): the disadvantage of the default (zip/exe file) distribution is it can't handle lots of pages or large attachments.
cjdelisle - (19:59): if you're testing then definitely use the default.
tsziklay - (20:00): actually I already have tomcat6 and mysql installed on the machine I will be using. Can I incorporate those instead?
cjdelisle - (20:01): you can. Do you plan to be uploading pages with chinese writing?
cjdelisle - (20:03): or really any pages which use characters outside of the common English, French, Spanish, German etc. languages?
tsziklay - (20:04): no, just english :)
cjdelisle - (20:05): Ok then mysql is fine.
cjdelisle - (20:05): Mysql has a limitation which prevents some languages from working correctly.
tsziklay - (20:07): guess they do databases differently in the orient :D
tsziklay - (20:07): ok, so if I am going to use mysql and tomcat that I already have installed, then I don't want to do the default distribution right?
tsziklay - (20:08): cjdelisle: I'll want to do the manual zip install instead?
cjdelisle - (20:09): If you want to use mysql/tomcat, you need the .war file but I would do the default if you just need something quick to show the boss.
lucaa left at 20:09 (Ping timeout: 276 seconds
tsziklay - (20:11): it doesn't need to be done by today, but I do only have 1 week left. I'm a temp intern here :)
cjdelisle - (20:12): There are a number of pitfalls and traps when installing with mysql, a number of others with tomcat. I definitely recommend the .zip file if you're on linux, .exe if windows (server).
lucaa joined #xwiki at 20:15
cjdelisle - (20:17): Sixy.ch: directory of IPv6 enabled web sites 3846 sites in database FAIL
tsziklay - (20:18): I am on linux, so I guess I will do the .zip file for now
tsziklay - (20:19): it does have links to instructions on how to install with tomcat and mysql, and since I have those on the machine already wouldn't it be a little easier?
tsziklay - (20:19): cjdelisle ^
cjdelisle - (20:19): No definitely not easier. The zip version has the database and server included.
cjdelisle - (20:20): you just type start_xwiki.sh
cjdelisle - (20:52): hmm. something missing in the certificates is the protocol version.
abusenius - (21:21): protocol?
cjdelisle - (21:22): Well foafssl gets the modulus from the page and parses it as xml.
cjdelisle - (21:23): suppose foafssl caught on and everyone was using it. Then we would optimize the connection to use like 1 udp packet or something.
cjdelisle - (21:24): So you put a version number in the cert so the client knows how it is allowed to call the server.
abusenius - (21:26): yea, it wouldn't hurt
abusenius - (21:26): does foafsll do something like this already?
cjdelisle - (21:26): Maybe the client should send a header telling how it can receive the server's response.
cjdelisle - (21:26): hah, no.
cjdelisle - (21:27): Nobody ever seems to make protocols upgradable.
cjdelisle - (21:36): http://maven.xwiki.org/site/xwiki-core-parent/xwiki-core/apidocs/index.html?overview-summary.html
cjdelisle - (21:37): yay, upgraded to 2.5-SNAPSHOT, now I can close the first issue in what seems like forever.
lucaa left at 21:51 (Quit: Leaving.
tmortagne left at 22:09 (Quit: Leaving.
vmassol joined #xwiki at 22:12
MartinCleaver joined #xwiki at 22:26
florinciu1 left at 22:59 (Quit: Leaving.
MartinCleaver left at 23:11 (Ping timeout: 260 seconds
MartinCleaver joined #xwiki at 23:18
vmassol left at 23:24 (Ping timeout: 246 seconds
vmassol joined #xwiki at 23:30
MartinCleaver left at 23:33 (Quit: MartinCleaver
MartinCleaver joined #xwiki at 23:36
tsziklay - (23:51): cjdelisle: I came across this site that explains how to install xwiki on ubuntu with tomcat and mysql. However I don't know if I should do this because I already have a tomcat/mysql server on the machine for something else, is it possible to have two war files and basically two server apps (grails and xwiki) on the same machine like that? I don't want to lose the functionality of the first one
tsziklay - (23:51): here is the site btw http://halfahairwidth.blogspot.com/2009/09/how-to-install-xwiki-on-ubuntu.html
tsziklay - (23:52): the instructions on that site look good up until it gets to the point where its editing the "hibernate" file and doing some xwiki user configuration...
cjdelisle - (23:52): you can have 2 servers on the same machine, you have to change the port number if you want to run both at once.
tsziklay - (23:52): ah, I see. I can change the xwiki one fairly easily right?
cjdelisle - (23:53): 'editing the "hibernate" file' <-- Why I suggested the easy installation.
cjdelisle - (23:53): don't put a lot of data in the easy install version, it might be hard to port it over. Fine for testing though.
tsziklay - (23:54): and btw if my /vim/tomcat/webapps/ folder has the xwiki war file AND another war file (for my grails server) will that mess anything up?
tsziklay - (23:54): i.e. will it not know which war file to call or anything like that when I start either server?
tsziklay - (23:55): cjdelisle: I may end up doing the easy install if time doesn't allow me to figure out the difficult install, but since my superiors want to upgrade to xwiki for the company's wiki structure I assume that it would be best to have a more fleshed out version that is capable of handling many pages
cjdelisle - (23:56): You can run multiple war files on one tomcat.
cjdelisle - (23:57): But I think you ought to get something running so you can start learning how to use it (create pages) as quick as possible.
tsziklay - (23:58): yeah that may be better. plus I started downloading the war file alone and the only location to download from is France, so I'm stuck waiting for a nice several-hours long download :(