Learn more about me at https://en.wikipedia.org/wiki/User:Sadads
My staff account can be found at [[Astinson]]
Something to consider is that the two most popular, non-dashboard-based tools for tracking have different forms of submission:
Suggested wording: "Discover potential participants for your project or event. Provide a list of articles that cover your activity's topical focus areas. Then, you will receive an invitation list of editors who may be interested in joining your project or event."
Most events that are "targeting" wikis would not have more than 10 or 20; a well-participated writing contest might have 40; otherwise the organizer would probably select the "all wikis" option. I think a limit of 50 or 100 would handle most situations and unexpected use cases.
I think @ifried's point that "all wikis" and "no target wikis" are kind of similar could lead to confusing, mixed measurement. Maybe you could clarify this in two stages: is the event about editing the wikis? If so, which wikis (all, or specific wikis)?
I work with and train people who frequently work in multiple languages, and I get the impression that folks who do not speak English are unlikely to discover the language change if it stays behind the registration wall. Moreover, multilingual users have very few hints that the language of the search interface changes the outcome of the search (e.g. using a Spanish-language term in the English search doesn't leverage structured data to also include other-language terms, whereas using the English-language term would).
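For illustration, here is a minimal sketch of how search language changes which labels a term matches -- it uses the standard Wikidata wbsearchentities endpoint as a stand-in for the structured-data-aware search described above, and the term "palmera" is just an example:

```python
# Sketch: compare Wikidata entity search results for the same term under
# two different search languages -- an illustrative stand-in for the
# language-dependent search behaviour described above, not the Commons
# search code path itself.
import requests

API = "https://www.wikidata.org/w/api.php"

def search(term, lang):
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": lang,   # language used to match labels and aliases
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    return [hit.get("label") for hit in data.get("search", [])]

# The same term matched as a Spanish label vs. as an English string
print(search("palmera", "es"))
print(search("palmera", "en"))
```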
They look good to me; I don't have any significant suggestions for revisions.
+ 1 on Affiliate data portal use.
You can close this ticket. It's been resolved.
I am really worried about this feature -- reader-empowered flagging of content always creates a backlog that overwhelms editing communities and destroys the balance of priorities across the community's backlogs: when someone who isn't signed in and participating as an editor is empowered to create "feedback" without experience in the community's curation process, the quality of the feedback is almost always poor.
I believe that has changed and this could be closed. @FRomeo_WMF, I believe this is now managed by you, right?
+1 to all of what Imelda highlights -- we could also leave a reference/link to the blocking/suppressing log in the description of "Username removed" for future investigators.
I favor the dual-timezone option, showing both the organizer's and the local timezone -- we are seeing a lot of cross-geography events in the movement, and I think with the pandemic et al. we are going to continue seeing the need for that kind of cross-timezone coordination.
Part of me is thinking that, from a trust and safety perspective, we would want all of the "deletes" to be soft deletes, with some of the soft deletes being hidden from organizers. @ifried happy to talk that through.
@Iflorez Sheet is updated! We are done with coding.
In part, a good reason to do this would be that we have indications (at least on English), that a lot more users were looking at the page than would be reasonable for an internal wiki workflow: https://pageviews.wmcloud.org/?project=en.wikipedia.org&platform=all-access&agent=user&redirects=0&range=latest-30&pages=MediaWiki:Bad_image_list
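For reference, a minimal sketch of pulling the same numbers programmatically from the Wikimedia pageviews REST API (the date range and User-Agent here are arbitrary examples):

```python
# Sketch: fetch daily pageviews for MediaWiki:Bad_image_list on English
# Wikipedia from the Wikimedia pageviews REST API. The date range and
# User-Agent are arbitrary examples.
import requests
from urllib.parse import quote

title = quote("MediaWiki:Bad_image_list", safe="")
url = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    f"en.wikipedia/all-access/user/{title}/daily/20230101/20230131"
)
resp = requests.get(url, headers={"User-Agent": "pageview-check-example/0.1"})
for item in resp.json().get("items", []):
    print(item["timestamp"], item["views"])
```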
Yep, I was using the getlink URLs like https://link.gale.com/apps/doc/A171034250/GPS?u=wikipedia&sid=GPS&xid=dbbf72df, but they weren't generating citations in Citoid, so I fell back on the preformatted citations, which are three buttons over.
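For anyone reproducing this, a minimal sketch of asking the Citoid REST endpoint directly what it can produce for that URL (the User-Agent string is just an example):

```python
# Sketch: ask the Citoid REST endpoint on English Wikipedia what citation
# data it can produce for the Gale getlink URL. The User-Agent string is
# just an example.
import requests
from urllib.parse import quote

gale_url = "https://link.gale.com/apps/doc/A171034250/GPS?u=wikipedia&sid=GPS&xid=dbbf72df"
endpoint = (
    "https://en.wikipedia.org/api/rest_v1/data/citation/mediawiki/"
    + quote(gale_url, safe="")
)
resp = requests.get(endpoint, headers={"User-Agent": "citoid-check-example/0.1"})
print(resp.status_code)
if resp.ok:
    for citation in resp.json():
        # A bare "webpage" itemType with little metadata is roughly the
        # "no useful citation" case described above.
        print(citation.get("itemType"), citation.get("title"))
```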
Yep, still occurs for me too: I tried adding "palm tree" to https://commons.m.wikimedia.org/wiki/File:GOR_Ranggajati_Sumber.jpg#/random and it opens Wikidata.
Does this example of using the API work as well? They appear to be generating the thumbnail from the Commons API directly. @NavinoEvans
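If it helps, this is roughly what requesting a thumbnail straight from the Commons imageinfo API looks like (the file title and width are just examples):

```python
# Sketch: request a thumbnail URL for a Commons file through the standard
# imageinfo API; the file title and width are examples.
import requests

API = "https://commons.wikimedia.org/w/api.php"
params = {
    "action": "query",
    "titles": "File:GOR_Ranggajati_Sumber.jpg",
    "prop": "imageinfo",
    "iiprop": "url",
    "iiurlwidth": 320,   # requested thumbnail width in pixels
    "format": "json",
}
data = requests.get(API, params=params).json()
for page in data["query"]["pages"].values():
    for info in page.get("imageinfo", []):
        print(info.get("thumburl"))
```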
@Amire80 I am currently doing list-building research with @SGill around how best to build lists for Campaigns -- there is a larger need in the movement for what I would describe as a "reusable worklist": something that can be generated through both manual and semi-automated list building (i.e. we prototyped something like this at T187305 and https://tools.wmflabs.org/worklist-tool), that can then be integrated into both on-wiki and off-wiki pages, and that is portable, so that you can use the same list for Event Metrics, Education-Program-Dashboard, or something like ListeriaBot, where it's printed out into other pages.
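As a rough illustration of what I mean by a "reusable worklist" -- the schema and field names below are hypothetical, just to make the idea concrete:

```python
# Hypothetical sketch of a portable "reusable worklist": the schema and
# field names are illustrative only, not an existing format.
import json

worklist = {
    "name": "Example campaign worklist",
    "wiki": "en.wikipedia.org",
    "generated_by": "manual curation + a PagePile/PetScan-style query",
    "items": [
        {"title": "Palm tree", "status": "todo"},
        {"title": "Cultural heritage", "status": "in_progress"},
    ],
}

# The same serialized list could be embedded on-wiki (via a bot or a
# Listeria-style template) or consumed off-wiki by a dashboard or tool.
print(json.dumps(worklist, indent=2))
```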
In T225976#5297935, @hdothiduc wrote: Hi @LMiranda and @Astinson / @Sadads
At the moment I am not quite able to start on any part of the request. I think everything will be more clear to me in our meeting next week.
Just wanted to comment on the above and to clarify/summarize my assumptions.
- Slide deck
- you will create the slides
- I will polish them and add the graphics that you require (you will let me know more details)
- Meta Page
- you will create the page
- I upload graphics to Commons
- Translating design materials
- you will provide the editable files (e.g. .ai or .eps for vector graphics)
- I will paste graphics into the Google Docs report document
I also noticed that you marked Blog in the task description. What deliverable does that refer to?
In T225976#5287471, @LMiranda wrote: Hi @hdothiduc I'm so sorry, no, I had not seen these earlier messages. I'm also looping in Alex @Sadads as he will be leading the Wikimania presentation.
I have a couple of clarifying questions:
So by "design assets for Wikimania" you mean a slide presentation (like Google Slides) and the Meta Page?
- correct. We need a polished slide deck and graphics for the meta page.
For the most part, we will need to take existing graphics and transform them into decks -- I don't think this is going to need too much design, especially with the version of the report that we got via the other designer.
What should be on the Meta Page?
- Likely a couple of key graphics that the other designer is working on for the report, we will want to repurpose some of these to create learning objects out of them. @Sadads what else?
I think we may want to publish some of the tables as extracted individual documents/graphics (as opposed to part of a deep PDF). We may want to extract them from the report PDF.
Do you already have the text that you will say during the presentation? Do you have more context on the event at Wikimania where you will present?
- We have the draft from the exec presentation but that will be iterated for the public presentation. Alex may have more to say on this.
So we haven't been confirmed on the Wikimania presentation -- but it will be some of the narrative from the Exec presentation plus some specifics targeted for the Wikimedia community.
In T224001#5204015, @99of9 wrote: Or if that is too much, scrap the counts entirely (a percentage is already clear), and add two links with the symbols + and -. So it could end up simply as "86.56% (+ -)"
@Zoranzoki21 do you have an ETA for when this will be available? I am not familiar with how the code gets added/merged.
I am not finding a setting like that for Chrome on Android... but this sounds like something we probably want to solve at the platform level, right?
I am on a Samsung S7, and was seeing it with Chrome and with Chrome via Gmail on Android 8.0. I don't have a separate PDF viewer installed.
@Qgil: I think so, and there are already some technical modules which handle the main behaviour here, but aren't integrated into this suite. For example, https://meta.wikimedia.org/wiki/GLAMorgan
@Doc_James: as it's been explained to me, there is no timestamp or queue of those changes to the "file usage" list -- so it's actually a more complex question of logging those changes, one way or another, in a way that isn't super resource-intensive.
The display of the metadata from Wikisource is kind of ugly in that preview: it's showing both the title and the file name, and some other stuff.
Does this run into some of the same problems as https://meta.wikimedia.org/wiki/Community_Tech/Edit_summary_length_for_non-Latin_languages ?
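For context on why summary length is a non-Latin issue at all: a byte-based limit holds far fewer characters in scripts whose characters take several bytes in UTF-8. A quick illustration (the sample strings are arbitrary):

```python
# Illustration: the same short phrase takes very different numbers of
# UTF-8 bytes depending on the script, which is why a byte-based summary
# limit penalizes non-Latin languages. Sample strings are arbitrary.
samples = {
    "Latin": "edit summary",
    "Cyrillic": "описание правки",
    "Chinese": "编辑摘要",
}
for script, text in samples.items():
    print(script, len(text), "chars,", len(text.encode("utf-8")), "bytes")
```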
It might be worth making it a user option for the wikitext editor.
@AlexMonk-WMF for the 2017 wikitext editor. I can think of a number of circumstances where my typing in the wikitext editor has created a bad template, slowed things down, or interfered with the accuracy of my contributions.
Ahaha, it looks like we need to figure out what the shortcut will be and how it will respond to editors. Thank you for exploring this.
Hi, I saw that this was supported as part of the 2016 Community Wishlist. I wanted to note that there are other applications of data about when and under which revision images are used, including but not limited to: notifying folks of deletion discussions if they have used the media file in their own projects; tracking whether mass uploads have been used by users, where the "uploader" may be an institution or bot operator rather than the creator themselves; etc. I have outlined a somewhat more robust way of tracking that kind of data at T137758, which could then be used to populate these kinds of notifications.
@Cyberpower678 might want to be tagged on this one.
Yeah, I think I can hack around this for the time being: use the pagepile
to create a report.
@Ottomata: would https://lists.wikimedia.org/pipermail/wikitech-l/2016-September/086617.html be an appropriate strategy for tracking changes in media files?
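To make the question concrete, here is a rough sketch of what stream-based tracking of file changes can look like, using the public recentchange stream as a stand-in -- I am not claiming this is the exact mechanism proposed in that thread:

```python
# Sketch: follow the public Wikimedia recentchange stream and keep only
# changes to the File namespace on Commons. This is a stand-in example of
# stream-based tracking of media file changes, not necessarily the exact
# approach discussed in the linked thread.
import json
from sseclient import SSEClient  # pip install sseclient

STREAM = "https://stream.wikimedia.org/v2/stream/recentchange"

for event in SSEClient(STREAM):
    if event.event != "message" or not event.data:
        continue
    change = json.loads(event.data)
    if change.get("wiki") == "commonswiki" and change.get("namespace") == 6:
        print(change.get("timestamp"), change.get("title"), change.get("user"))
```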
Ah good, let us know if we can help.
Hey @Cyberpower678, what's the status on deploying to Swedish? I was in a meeting with one of our IA contacts, and @Ocaasi_WMF and I want to make sure we have the right information on hand.
Also, another tool that tracks usage of files across wikis: https://tools.wmflabs.org/glamtools/glamorous/ . I think it queries the table that documents reuse on Commons directly.
@Nuria This isn't primarily an outreach.wikimedia problem: most of the files used for GLAMorgan end up on other community projects, and the tool is creating a ceiling of some sort on the number of requests it runs. I wonder if it has to do with running individual page stats rather than running them in batches.
Depending on how the data is stored, I could also see people looking at a set of articles on a wiki and wanting to look at the file change edits (but that might be a different set of data).
I think the main user story is something like (rough sketch with the existing APIs below):
*GLAM donates a large number of content items to Commons
*GLAM uses a category or pagepile of those files (or another subsection of the content) as a list, and wants to know who/when/how usage of the files changed
*Plug that set into the tool to be searched
*Get a report, so that they can reach out to, reward, or engage this group
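A minimal sketch of the "current usage" half of that report, using the standard categorymembers and globalusage APIs (the category name is a placeholder; a pagepile could feed the same loop):

```python
# Sketch: for each file in a (placeholder) Commons category, list which
# wikis and pages currently use it, via the globalusage API. A pagepile
# or other list could feed the same loop instead of a category.
import requests

API = "https://commons.wikimedia.org/w/api.php"
SESSION = requests.Session()

def category_files(category):
    params = {
        "action": "query", "list": "categorymembers",
        "cmtitle": category, "cmtype": "file",
        "cmlimit": "50", "format": "json",
    }
    data = SESSION.get(API, params=params).json()
    return [m["title"] for m in data["query"]["categorymembers"]]

def global_usage(file_title):
    params = {
        "action": "query", "prop": "globalusage",
        "titles": file_title, "gulimit": "50", "format": "json",
    }
    data = SESSION.get(API, params=params).json()
    for page in data["query"]["pages"].values():
        for usage in page.get("globalusage", []):
            yield usage["wiki"], usage["title"]

for f in category_files("Category:Example GLAM donation"):  # placeholder category
    for wiki, title in global_usage(f):
        print(f, "->", wiki, title)
```

This only answers the current "where"; the who/when history is exactly the part that needs the extra logging discussed above.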
Theoretically, this could be done rather robustly per @Mvolz, once we start sharing the structured metadata described in T68108. @Lydia_Pintscher & the Wikidata team probably have a better sense of whether that is tangible -- but if they think structured data on Commons is likely in the next year or two, I would not rush into building the Dark Archive, because semi-automatic functions are much easier at that point.
@MusikAnimal do you have a sense of whether there is a limit on the API or the computing resources that would be prohibitive for this kind of opening up of the tool?
Good to know, that delays one of my projects then -- which is fine, it looked like it might have been too early in next quarter anyway. Alex
In T123529#2618881, @Pginer-WMF wrote: In T123529#2199342, @Sadads wrote: In T123529#2199315, @Esanders wrote:
- apply the above to any other edit I make during the same browser session
Every edit? What about user talk page edits? Or what if I go home and start editing other stuff -- how do I clear this persistent auto-hashtag?
That's a good engineering design question -- I think you would have to be able to remove a hashtag from an edit summary once and have it not appear again -- we also might want a time limit on the persistence of the tag within someone's browser. For most use cases of a "campaign hashtag", you are mostly working with editors who only participate in a particular window of time and/or around the event and/or campaign.
Another possibility would be to pre-fill the tag the first time (when the cause-effect connection is clear after landing into editing from a campaign-related link), and make it easy for users to add the tag themselves on upcoming edits. If we provide a tag selector that surfaces recent tags, a user may just need to type "#" and hit enter to select the most recent tag (probably about the campaign they are participating in).
In this way, the user discovers the tag the first time and we make it easy to use repeatedly when needed (both in terms of adding it quickly, and not requiring them to recall the exact tag name). This approach eliminates the risk of forgetting to remove the tag when it is not appropriate, but introduces the risk of the user forgetting to add it.
@Nuria Cool! Is there a timeline for this -- the next couple of sprints?
Thanks for making this a phabricator item! Looking forward to the update on how this works!
+1 to @ManosHacker's strategy on this: the only real use cases are in bulleted or numbered lists.
Finally! Yay! Super excited!
Thanks @Nuria, that's good to know: community programs and events could really use the data coming off these wikis.
@kaldari & @Niharika Currently the OCR relies on external tools that depend on Tesseract OCR (see the list of supported languages at https://github.com/tesseract-ocr/langdata) when hOCR text is not available in the object itself (PDFs and/or DjVus will carry hOCR text with them if available); see https://wikisource.org/wiki/MediaWiki:OCR.js .
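For what it's worth, this is roughly the kind of Tesseract call involved, shown here via the pytesseract wrapper (the file name and language code are only examples) -- languages without a Tesseract language pack simply can't be requested:

```python
# Sketch: run Tesseract OCR on a page image with an explicit language
# pack (the file name and the "ben"/Bengali code are only examples).
# Languages missing from Tesseract's langdata simply can't be requested.
from PIL import Image   # pip install pillow
import pytesseract      # pip install pytesseract (requires tesseract-ocr installed)

page = Image.open("scan_page_001.png")
text = pytesseract.image_to_string(page, lang="ben")
print(text)
```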
What languages are currently unsupported by the OCR on Wikisource?
FYI @Harej
As a follow-up to the comment at T132088#2360795 -- @Catrope, we talked to @Quiddity and @jmatazzoni about creating a notification to help encourage folks to get Wikipedia Library access. Do you have a sense of when this could get done?
Sounds great! That's a bit hacky, but it would make a huge difference for those of us working in metaspaces.
@Multichill yes, theoretically any map, but really the stuff that we have a whole lot of data for (paintings, for instance, or cultural heritage). The idea would be to find some way to get a map or graph created by a volunteer, with educational value, out onto other websites, theoretically also with the added value of Wikidata queries or snapshots from Wikidata queries.
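As a concrete example of the kind of query-backed data such a map would sit on (the query itself is only an illustration: items with a heritage designation and coordinates):

```python
# Sketch: pull coordinates for cultural heritage items from the Wikidata
# Query Service -- the kind of data a volunteer-made map would be built
# on. The query (heritage designation + coordinates) is only an example.
import requests

SPARQL = """
SELECT ?item ?itemLabel ?coord WHERE {
  ?item wdt:P1435 ?designation ;   # has some heritage designation
        wdt:P625 ?coord .          # coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": SPARQL, "format": "json"},
    headers={"User-Agent": "map-data-sketch-example/0.1"},
)
for row in resp.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"], row["coord"]["value"])
```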
Mostly solved with Massviews; there is a better conversation/bug at https://phabricator.wikimedia.org/T135437